Robotic pipe inspectors from ICAIR could protect billions of liters of water from leaking.
The University of Sheffield’s Integrated Civil and Infrastructure Research Centre (ICAIR) is testing a new generation of subterranean robotic pipe patrollers. Pipebots are tiny, mobile robots with all-terrain legs and cameras for eyes. They are being developed in coordination with the water sector to inspect pipes and detect flaws and cracks before they become leaks.
According to Ofwat, the economic regulator for the water industry, about three billion liters of water are lost through leaks every day in England and Wales, across hundreds of thousands of kilometers of water pipe. Engineers have now created miniature robots to patrol the pipe network, look for problems, and stop leaks; without robotics, they say, maintaining the network will be “impossible.” According to the water industry’s trade group Water UK, companies are already “spending billions” on reducing leakage. However, a recent Ofwat assessment found that water providers had not invested enough, citing a number of them as “letting down customers and the environment” by under-investing in upgrades. In response, Water UK stated that leakage had reached “its lowest level since privatization.” Leaks are a common and challenging issue: in the UK, millions of properties are supplied with water by hundreds of thousands of kilometers of pipe in varying states of repair and age.
“We look after more than 8,500km (5,282 miles) of pipe in [this region] alone, but only approximately half the leaks in those pipes are visible, so it’s difficult to determine where [the rest] are,” said Colin Day of Essex and Suffolk Water. The subject of wasted water has received a lot of attention this year. Following the summer drought, localized hosepipe bans remain in effect for three firms, South East Water, South West Water, and Yorkshire Water, according to Water UK. Furthermore, according to Ofwat, 20% of consumers in England and Wales are having trouble paying their water bills in the current economic downturn. Companies have, however, decreased leakage by an average of 6% over the past year, according to Ofwat. The sector has committed to the government’s goal of halving water loss by 2050, and Water UK acknowledged that improvement needs to “accelerate”. Companies say they are using the newest technology, such as specialized in-pipe cameras, satellite imaging, thermal drone technology, high-tech probes, and artificial intelligence.
Some companies already use tethered robots to look inside inaccessible pipes, but most of the network cannot yet be reached without digging. This is where smaller, artificially intelligent machines can help. “Companies only react to defects at the moment, not proactively,” said Prof. Kirill Horoshenkov. Robots need to be inside the pipes, continuously gathering data, before faults ever develop. Holding one of the miniature robots in his hand, Prof. Horoshenkov explained: “With a microphone to hear the pipe, they move along it while taking images. They are made to determine whether a problem in the pipe is likely to occur or not.”
Prof. Netta Cohen, an expert in artificial intelligence at the University of Leeds, says that communication is the pipebots’ biggest hurdle. “GPS is unavailable underground, so they will talk to one another over short distances, through sound or the internet.” She and her colleagues are working on a system in which a larger “mother” robot carries and launches a fleet of smaller robots.
A Fleet Of Winged Underwater Robots Will Patrol The Seas For The US Navy
The ocean is an information-rich environment, if sensors are present to read it. The United States Navy, as part of its continued mission to operate throughout the seas and secure its own freedom of movement, is turning to new robots to collect and share that information, operating invisibly under the surface.
In late July, the Navy awarded a contract worth up to $39.2 million to Teledyne Brown Engineering for these underwater sensing robots. Formally the program is for “littoral battlespace sensing-gliders,” or LBS-G, a clunky acronym rich with meaning.
Breaking down the jargon helps reveal what these craft will do, and where. “Littoral” is coastal, the spaces along major bodies of water that are of profound interest for human use, home to much boat traffic, and especially to any incursions that threaten activity on land. “Battlespace” is a messier term, but it is essentially how the military understands the factors of an environment—everything from the weather to the positions of vehicles to ambient electromagnetic interference—that might shape how fighting happens. Finally, “sensing-glider” captures the design of these winged, torpedo-shaped robots, which propel themselves like planes under the surface.
[Related: The Royal Navy’s robotic sub will be a test bench under the sea ]
The robots selected for this program will be based on Teledyne’s existing Slocum glider. Depending on the battery, existing Slocum gliders can operate with a short range of 220 miles for 15 days, or a maximum range of 8,000 miles over 18 months. They can also travel on the surface of the sea, and from there upload sensor readings to Iridium communication satellites for dispersal.
Teledyne says this program is the first “Unmanned Underwater Vehicle (UUV) program chosen for full-rate production by the U.S. Navy,” and one of these gliders was already the centerpiece of an international incident. In December 2016, a Chinese navy vessel seized a Slocum glider in the waters of the South China Sea, before returning it to the US Navy a few days later.
A Slocum glider (image: Teledyne).
For the LBS-G program, Navy specifications stipulate that the robot must be able to operate at a depth of 3,300 feet for up to 90 days. This means the robots can serve, in a range of conditions, as a useful yet expendable kind of underwater weather station, checking in to inform the Navy as a whole about conditions under the sea, thanks to their suite of sensors.
Those sensors will read the electrical conductivity of the water, a dataset that can give the Navy information about how well certain sensors will work in the ocean. Conductivity is also useful to know for figuring out ballast requirements on a submarine, and better to have in hand before arriving.
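As a rough illustration of why conductivity matters, the standard oceanographic route from a glider’s raw readings to seawater density runs through the TEOS-10 equations. The sketch below is only an assumption-laden example: it uses the open-source Python gsw package and invented sample readings, not anything from the Navy program, to show the chain from conductivity, temperature, and pressure to practical salinity and in-situ density, the quantity that feeds into ballast calculations.

```python
# A rough sketch, not Navy or Teledyne code: converting a glider's conductivity,
# temperature, and pressure readings into seawater density using the TEOS-10
# equations via the open-source gsw package. The sample values are invented.
import gsw

C = 45.0                 # conductivity, mS/cm (illustrative reading)
t = 12.0                 # in-situ temperature, deg C
p = 500.0                # sea pressure, dbar (roughly 500 m depth)
lon, lat = 120.0, 15.0   # rough South China Sea coordinates

SP = gsw.SP_from_C(C, t, p)           # practical salinity from conductivity
SA = gsw.SA_from_SP(SP, p, lon, lat)  # absolute salinity, g/kg
CT = gsw.CT_from_t(SA, t, p)          # conservative temperature
rho = gsw.rho(SA, CT, p)              # in-situ density, kg/m^3

print(f"practical salinity: {SP:.2f}, in-situ density: {rho:.1f} kg/m^3")
```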
These sensing-gliders will also check for temperature and depth, both of which inform underwater operations, and can scan for optical clarity, or how easy it is to use visual sensing beneath the waves.
[Related: Robots of the Deep Blue Yonder ]
With these sensors, the gliders can look for underwater naval mines, which are explosives that pose a threat to larger and crewed vessels. Knowing if mines are present, and where to avoid them if so, is vital information—the difference between safe passage and secure landings or watery graves.
The bots also provide a lower stakes way to perform some oceanic monitoring and surveillance. The ocean is vast, and while the Navy may be interested in all of it, there is a finite capacity to monitor the sea. Using robots expands the range and reach of this monitoring, and it’s less of a big deal if a robot is captured doing this work than it would be if a human crew was captured for doing the same.
More broadly, these robots are not just tools, but part of the Navy’s broader vision for an “ocean of things,” a sensor-rich sea where what can be known about conditions in the water is collected and shared with fleets in real time, or close to real time. Knowing the shape of water means knowing the shape of battles to come, and even knowing how and where to avoid battles that are set to go poorly.
How To Prevent Session Hijacking In Django?
Session hijacking, or session forging, is another security issue that most websites are prone to. In this article, we will learn more about the attack and how to protect your website against it.
Session hijacking is a broad class of attacks on a user’s session data rather than one specific attack. It takes several forms, which are discussed below.
A man-in-the-middle attack occurs when an attacker intercepts session data as it travels over the network.
A cookie-forging attack is another type, in which an attacker alters the supposedly read-only data saved in a cookie. Websites have a long history of storing cookies like IsLoggedIn=1 or even LoggedInAsUser=ram, and exploiting these kinds of cookies is a piece of cake.
That’s why you should never trust anything stored in cookies; you never know who’s been digging around in them.
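To see how trivially such cookies can be forged, consider the sketch below; the URL and cookie names are hypothetical, echoing the examples above, and any HTTP client can present whatever cookie values it likes.

```python
# A minimal sketch of cookie forging: nothing stops a client from sending an
# "IsLoggedIn" style cookie it never legitimately earned. The URL and cookie
# names are hypothetical, echoing the examples above.
import requests

response = requests.get(
    "https://example.com/dashboard",
    cookies={"IsLoggedIn": "1", "LoggedInAsUser": "ram"},  # forged, no login performed
)
print(response.status_code, len(response.text))
```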
Session fixation is when an attacker tricks a user into setting or resetting their session ID to one the attacker already knows.
Session forging is when an attacker uses a session ID, perhaps obtained through a man-in-the-middle attack, to pretend to be another user.
As an example of the first two, an attacker in a shopping mall might use the shop’s wireless network to capture a session cookie, then use that cookie to impersonate the original user.
Session poisoning is when an attacker injects potentially harmful data into a user’s session, normally via a web form that the user fills out to set session data.
These are some of the ways sessions can be forged. Now let’s look at how to defend against session hijacking.
The solution to session hijacking
There are a few broad guidelines that can help you avoid these attacks:
Allowing session information to be included in the URL is never a good idea. The session mechanism in Django simply does not allow sessions to be included in URLs.
Instead of directly storing data in cookies, store a session ID that corresponds to session data maintained on the backend. This is handled automatically for you if you utilise Django’s built-in session framework, request.session. A single session ID is the only cookie used by the session framework. The database stores all of the session data.
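A minimal sketch of what that looks like in practice (the view names and session keys are illustrative, not from the article): only the opaque session ID travels in the cookie, while the data itself stays on the server.

```python
# A minimal sketch of Django's session framework: request.session behaves like a
# dict, Django persists it on the backend (the database by default), and the
# browser only ever receives the opaque session ID cookie.
from django.http import HttpResponse

def add_to_cart(request, product_id):
    cart = request.session.get("cart", [])
    cart.append(product_id)
    request.session["cart"] = cart  # stored server-side, keyed by the session ID
    return HttpResponse(f"{len(cart)} item(s) in cart")

def view_cart(request):
    cart = request.session.get("cart", [])
    return HttpResponse(", ".join(str(pid) for pid in cart) or "Cart is empty")
```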
If you want to display session data in the template, remember to escape it.
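Django templates escape variables automatically; the case to watch is HTML built by hand. A small sketch, with an illustrative session key:

```python
# A small sketch: session data that may contain user input should be escaped
# before being rendered as HTML. Django templates do this automatically; if you
# build HTML by hand, use django.utils.html.escape.
from django.http import HttpResponse
from django.utils.html import escape

def greet(request):
    name = request.session.get("display_name", "guest")  # may contain user input
    return HttpResponse(f"<p>Hello, {escape(name)}</p>")
```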
Whenever feasible, prevent attackers from spoofing session IDs. Although detecting someone who has hijacked a session ID is nearly impossible, Django has built-in protection against brute-force session attacks: session IDs are stored as random hashes rather than sequential integers, which defeats brute-force guessing, and a user who tries a nonexistent session ID will always be issued a new one, which prevents session fixation.
None of these measures protect against man-in-the-middle attacks, which are nearly impossible to detect. If logged-in users have access to sensitive data on your site, it should always be served over HTTPS.
Lastly, set the SESSION_COOKIE_SECURE setting to True if you have an SSL-enabled site; this will force Django to send the session cookie only over HTTPS.
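In settings.py that looks like the sketch below. SESSION_COOKIE_SECURE is the setting referred to above; the companion settings shown are a common hardening baseline rather than anything this article mandates.

```python
# settings.py sketch for an HTTPS-only site. SESSION_COOKIE_SECURE is the setting
# discussed above; the other lines are a common, optional hardening baseline.
SESSION_COOKIE_SECURE = True    # only send the session cookie over HTTPS
SESSION_COOKIE_HTTPONLY = True  # keep JavaScript from reading the session cookie
CSRF_COOKIE_SECURE = True       # same treatment for the CSRF cookie
SECURE_SSL_REDIRECT = True      # redirect plain-HTTP requests to HTTPS
```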
In these ways, session hijacking can be controlled and monitored.
Why Are Websites Hacked? How To Prevent Hacking?
Why are websites hacked? It’s not true that only top websites are hacked; smaller websites and blogs are more vulnerable. This post looks at why websites are hacked, what to do if your blog is under cyber attack, and how to prevent stealth attacks and hacking and reduce risks.
Recently, we faced an attack that lasted for a couple of days. While the popular notion is that only huge corporate houses and government websites are the targets, the opposite also holds true. Smaller websites and blogs are targeted more… in an attempt to use them for larger attacks among other things.
Why are websites hacked? Using websites for a larger attack
Just as some of us fear that the Internet of Things could be compromised for use in DDoS attacks, websites all over the Internet can also be co-opted by attackers to take part in larger-scale attacks. Compromising bank websites, corporate accounts, and government websites are examples of such large-scale attacks. Hackers often do not have all the resources they need on their own: they need a very large number of bots to carry out attacks of that scale, so they compromise smaller websites and keep them on their list until a large attack is planned.
Read: What is a Botnet attack.
Attackers compromise even a blank website
Hackers will compromise even a blank website or blog to add to their list of resources. If you have built a website using something interactive like WordPress or Joomla, you are more prone to attacks than with a static website.
When people use WordPress, for example, they typically install many plugins. Since these plugins are interactive and script-based, they can be abused to help launch a massive attack on websites with huge resources. Bandwidth and other resources are limited on smaller websites, but a site like Amazon has enormous bandwidth and would be difficult to bring down unless the hackers have enough bots to choke the service. That is one of the primary reasons why almost all websites are prone to hacking.
In short, hackers have bots crawling all over the Internet to find resources that will help them launch huge attacks. If you start a new website that uses different types of scripts, you may be added to hackers’ resource lists within a month of launch. When the time comes, they compromise your website and use its resources for a major attack somewhere else.
Using your website resources for financial gains
Cybercrime is big business! Many times, hackers will try to use your site to direct visitors to:
Some other website that will pay commission to them or
Look-alike websites that will steal your personal and financial information
All they need to do is insert a link that you won’t know is present on your website. When search engines like Google crawl your site, they will index the malicious link and present it on the results page. If somebody follows that link, they are redirected to some other website, and the hackers make money from that redirection.
The look-alike, spoof websites are more common as they benefit hackers more by providing them with your information. Once your information – such as email ID or credit card information – is with them, they’ll use it for personal gains.
Read: How do I know if my Computer has been Hacked.
Using websites to compromise your computer or network
Hackers can also use a compromised website to attack its visitors’ computers and networks, and then:
Use the users’ computers or networks as bots for launching an attack somewhere else
Sell user information on places like the Darknet for a price
Read: How to remove Coinhive crypto-mining script from your website.
Hacktivists compromise websites for social issues
Hacktivists are generally a group of hackers who think they are doing good for society by acting against websites that oppose their group’s views. For example, Anonymous threatened Donald Trump after the latter made some remarks against a minority group in the US. I don’t know whether they actually defaced the presidential candidate’s website, but that threat was in the news for a long time. Hacktivists in countries at war often deface each other’s government websites.
Read: Google Project Shield offers free DDoS protection to select websites.
Revenge Hacking and Competition
One of the common reasons for hacking websites is revenge, or bringing down a competitor’s website so that the person or organization suffers a loss. If your site is popular in a niche where plenty of others are struggling, they may try to hack it, or hire a hacker, to bring your site down so that users cannot access it for days and lose interest in it.
A DDoS attack, for example, hurts the site and stresses the owner for a period of time. The most common aim is to bring the site down and deface it so that the owner suffers a loss of reputation. After a successful DDoS attack, attackers may also try to defame the website by inserting malicious code that harms its visitors. But if you are prepared, you can shut the site down and fall back on a static mirror as soon as the DDoS starts.
Read: What is Domain Hijacking and how to recover a stolen domain name.
Building a reputation or sheer boredom
Some may do it out of sheer boredom, while others may hack a site simply to ‘build a reputation’ and brag about it in their community.
How to prevent hacking
There will always be attempts to compromise your site, but if you are prepared, you can prevent a good percentage of them. Think of the following as precautions that will help you:
Use a good web application firewall, such as Sucuri, to block attacks and shut down the website as soon as an offensive is launched. And make sure that it is configured correctly.
Since hackers’ most common method is to use your own scripts against you, keep only the scripts you actually need.
Update your blogging software & plugins.
WordPress and similar plugins are updated often, but website owners frequently do not update the ones on their sites, either because they are unaware or because they fear the update will break the site. If you are using WordPress or Joomla, you should update plugins regularly, and if anything goes wrong, such as text alignment issues, contact a web designer to get it fixed.
Stay safe. Take these steps to protect & secure your WordPress site.
List of services that can scan WordPress for malware
There are many free (limited) and professional (paid) services that can scan your WordPress website on demand, or keep running in the background. Here is a list of services you can consider:
Wordfence
Sucuri
Security Ninja
iThemes Security
Jetpack
Make sure to go through each one’s features and pricing.
Digital Twins: An Advanced Technology To Improve Industrial Robots
Using digital twins in a production system can significantly shorten the time taken to set up and validate a robotic system.
Production systems are becoming more adaptable and agile to meet the demand for more individualized products. Robotics technology can meet these demands, but programming and re-deploying robots involves significant expense, particularly for small and medium-sized enterprises.
As part of the artificial intelligence and machine learning revolution, robots today can make real-time decisions based on data sources such as cameras (two- or three-dimensional), force and torque sensors, and lidar. These enable robots to perform industrial operations that previously had to be performed by people, such as part or product detection, random part grasping, assembly, and wiring. Machine learning algorithms, such as deep artificial neural networks, are the ‘brains’ behind these complex robotic abilities. Unlike traditional programming, a machine learning algorithm is not explicitly programmed; instead, it is trained for specific tasks by showing it real examples of the desired result.
A digital twin is a virtual model of an industrial robot, with the real robot operating in synchrony with its virtual twin. This means that algorithms are used to connect the various links and sensors of a particular computer model to a real robot, forming a pair of digital twins. While at present the signal travels from the digital twin to the real robot and back with some delay, it will work smoothly under a 5G network. The applications of digital twins for industrial robots range from digital business and mechanical engineering to the manufacture of self-driving vehicles.
One benefit of digital twins is that while an industrial robot is working, another operation can be programmed on the digital twin and tested in simulation at the same time. This is a significant achievement, given that one minute of a manufacturing cycle performed by an industrial robot can require 45 minutes of programming, which can now be done without interrupting the manufacturing process. Another benefit is considerably improved safety: no physical human presence is needed to correct or reprogram the robot’s algorithms, since the tasks can be done virtually, for example by a remote operator.
By using a digital twin of the production system and the product, it is now possible to significantly shorten the time taken to set up and validate a robotic system with integrated vision and machine learning. As a result, you can achieve robust and reliable results faster and at much lower cost. In a virtual environment, the real robot, parts, and camera are replaced with virtual ones. Instead of spending a lot of time and resources setting up hardware, capturing numerous images, and manually annotating them, it is now possible to do so efficiently and automatically within a virtual environment. The next step is to move from virtual to physical: the real equipment is set up and integrated. The machine learning algorithm may require some additional training with images captured from the real camera. However, since the algorithm has already been pre-trained in the digital twin, it will need significantly fewer real sample images to achieve an accurate and robust result, thereby reducing physical commissioning time, resources, and re-work.
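To make the last point concrete, here is a minimal sketch of the pre-train-in-simulation, fine-tune-on-real-images workflow described above. It assumes PyTorch/torchvision and invented dataset folders; it illustrates the general idea, not any vendor’s actual pipeline.

```python
# A minimal sketch (not any vendor's actual code) of the sim-to-real workflow:
# a vision model is pre-trained on images rendered by the digital twin, then
# fine-tuned on a small number of real camera images after commissioning.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def make_loader(root):
    # Folder-per-class image datasets: "synthetic/" rendered by the simulator,
    # "real/" captured by the physical camera. Both paths are illustrative.
    return DataLoader(datasets.ImageFolder(root, transform=preprocess),
                      batch_size=32, shuffle=True)

def train(model, loader, epochs, lr):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. "part present" / "part absent"

# 1) Pre-train on plentiful synthetic images from the digital twin.
train(model, make_loader("synthetic/"), epochs=20, lr=1e-3)

# 2) Fine-tune on a handful of real images from the physical camera.
train(model, make_loader("real/"), epochs=3, lr=1e-4)
```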
Here’s How We Prevent The Next Racist Chatbot
It took less than 24 hours and 90,000 tweets for Tay, Microsoft’s A.I. chatbot, to start generating racist, genocidal replies on Twitter. The bot has ceased tweeting, and we can consider Tay a failed experiment.
“Tay” went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI — gerry (@geraldmellor) March 24, 2016
The bot, which had no consciousness, obviously learned those words from the data it was trained on. Tay reportedly did have a “repeat after me” function, but some of the most offensive tweets were generated by Tay itself.
Life after Tay
However, Tay will not be the last chatbot exposed to the internet at large. For artificial intelligence to be fully realized, it needs to learn constraints and social boundaries much the same way humans do.
Mark Riedl, an artificial intelligence researcher at Georgia Tech, thinks that stories hold the answer.
“When humans write stories, they often exemplify the best about their culture,” Riedl told Popular Science. “If you could read all the stories a culture creates, those aspects of what the protagonists are doing will bubble to the top.”
By training artificial intelligence systems to read stories with upstanding protagonists, Riedl argues that we can give machines a rough moral reasoning.
The technique that Riedl has devised, called Quixote, places a quantifiable value on socially appropriate behavior in stories. A reward system reinforces behavior that matches the stories and punishes behavior that does not, as the A.I. algorithm acts out scenarios in simulation.
This is all in the pursuit of making artificially intelligent algorithms act like protagonists in books, or even good, ordinary people.
In Tay’s case, a chatbot could have been taught about social guidelines in talking about gender, race, politics, or history. By emulating fictional personas, we can actually build morals into the way that the machine makes decisions. This, of course, could work both ways. In theory, someone could also make malicious robots, but Riedl says that in most published fiction the antagonist is punished, so it would be a bit more difficult of a task.
Riedl’s paper, presented at the AAAI Conference on Artificial Intelligence, describes a scenario in which a robot has to buy prescription drugs at a pharmacy. The path of least resistance for the robot is to identify and take the drugs, stealing them. But when trained on a series of stories, the algorithm learns that it is better to wait in line, produce a prescription, pay, and leave. It should be noted that this research is in its infancy and is not being applied to real robots, only run in simulations. A toy sketch of the reward-shaping idea follows below.
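The sketch below is an illustration of the general principle, not the actual Quixote system: a tabular Q-learner over a tiny, made-up pharmacy state machine, where transitions matching the socially acceptable story sequence earn a bonus and antisocial shortcuts are penalized.

```python
# A toy sketch (not the real Quixote implementation) of story-derived reward
# shaping: Q-learning over a tiny pharmacy scenario. Transitions that appear in
# exemplar stories earn a bonus; antisocial shortcuts are penalized.
import random

ACTIONS = {  # hypothetical states and the actions available in each
    "enter":   ["grab_drugs", "wait_in_line"],
    "in_line": ["grab_drugs", "show_prescription"],
    "counter": ["leave_without_paying", "pay"],
    "done":    [],
}
NEXT = {
    ("enter", "grab_drugs"): "done",
    ("enter", "wait_in_line"): "in_line",
    ("in_line", "grab_drugs"): "done",
    ("in_line", "show_prescription"): "counter",
    ("counter", "leave_without_paying"): "done",
    ("counter", "pay"): "done",
}
STORY_BONUS = {  # shaping reward derived from the "good protagonist" sequence
    ("enter", "wait_in_line"): 1.0,
    ("in_line", "show_prescription"): 1.0,
    ("counter", "pay"): 1.0,
    ("enter", "grab_drugs"): -2.0,
    ("in_line", "grab_drugs"): -2.0,
    ("counter", "leave_without_paying"): -2.0,
}
TASK_REWARD = 3.0  # obtaining the drugs at all, however it happens

Q = {(s, a): 0.0 for s, acts in ACTIONS.items() for a in acts}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(2000):  # learning episodes
    state = "enter"
    while state != "done":
        acts = ACTIONS[state]
        if random.random() < epsilon:
            action = random.choice(acts)
        else:
            action = max(acts, key=lambda a: Q[(state, a)])
        nxt = NEXT[(state, action)]
        reward = STORY_BONUS[(state, action)] + (TASK_REWARD if nxt == "done" else 0.0)
        future = max((Q[(nxt, a)] for a in ACTIONS[nxt]), default=0.0)
        Q[(state, action)] += alpha * (reward + gamma * future - Q[(state, action)])
        state = nxt

# With story-derived shaping, the greedy policy is wait -> show prescription -> pay.
state = "enter"
while state != "done":
    action = max(ACTIONS[state], key=lambda a: Q[(state, a)])
    print(state, "->", action)
    state = NEXT[(state, action)]
```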
In scenarios such as Tay.ai’s deployment, Microsoft wanted to create a friendly, conversational bot.
“I think it’s very clear that Tay doesn’t understand what it’s saying,” Riedl said. “It goes way beyond having a dictionary of bad words.”
Riedl is optimistic, and thinks that as we refine these systems by putting ethics or morality in before rather than retroactively, they’re going to lean towards becoming better as they learn about humanity, instead of worse.
“All artificial intelligence systems can be put to nefarious use,” he said. “But I would say it’s easier now, in that A.I. have no understanding of values or human culture.”
Showing the cards
But while any algorithm that generates speech in public has the potential to gaffe, Nicholas Diakopoulos, an assistant professor at the University of Maryland who studies automated newsbots and news algorithms, says that Microsoft could have mitigated the backlash by being more open about its training data and methodology.
“Being transparent about those things might have alleviated some of the blowback they were getting,” Diakopoulos said in an interview. “So people who perceive something as racial bias can step into the next level of detail behind the bot, step behind the curtain a little bit.”
Diakopoulos calls this “algorithmic transparency.” But he also makes the point that algorithms aren’t as autonomous as commonly believed. While Tay was made to say these racist, sexist remarks, there were mechanisms that strung those words together. Those mechanisms have human creators.
“People have this expectation of automation being this unbiased thing. There’s people behind almost every step of it being built. For every little error or misstep of the bot, maybe you could try to trace [it] back,” Diakopoulos said.
Who’s to blame for Tay’s bad words?
Laying blame for the statements made by Tay is complex.
Alex Champandard, an A.I. researcher who runs the neural-network painting Twitter bot @DeepForger, says that you could make most reply bots generate incendiary tweets without the owner being able to control what happens. His own bot is based on images, which are much more complex to protect from harassment than simply blocking certain phrases or words.
As for Tay, Champandard says that Microsoft was naive and built a technical solution without considering what people might submit. He says this points to an existing problem with machine-learning chatbots in general.
“I believe most Reply Bots are and will be vulnerable to attacks designed to make political statements,” Champandard wrote in a Twitter DM. “This type of behavior is reflective of Twitter’s general atmosphere, it happens even if only 0.05% of the time.”
He doesn’t think a blacklist for bad words is the answer, though, either.
“No finite keyword banlist will help resolve these problems,” he writes. “You could build a whitelist with specific allowed replies, but that defeats the purpose of a bot; what makes it interesting is the underlying randomness.”
That randomness is a reflection of Twitter itself; “a lens through which we see current society,” says Champandard. There’s good and bad—tweets can be straight fire or cold AF.
If Microsoft’s experience with its A.I. Twitterbot Tay taught us anything, it’s that we still have a long way to go — both in terms of our A.I. programming, and in terms of making our human society more humane and civil.
As AI further integrates into our society, for every AlphaGo there will be at least five Tays. Failures will happen along the way, so be it!
— Alex J. Champandard ❄️ (@alexjc) March 24, 2016