FDA Working on New Way to Approve AI and ML Software in Medical Devices 


The FDA is charting a path for companies to gain approval for medical devices incorporating AI software, with an action plan outlining measurable steps. (Credit: Getty Images)  

By AI Trends Staff  

The US Food and Drug Administration (FDA) in February issued an AI and Machine Learning Software as a Medical Device Action Plan, outlining the way forward for manufacturers seeking the agency’s approval for their innovative medical devices incorporating AI.  

“Artificial intelligence (AI) and machine learning (ML) technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day,” state the report authors at the outset.  

The report is a follow-up to a 2019 effort to begin the discussion of how to proceed with medical devices using AI, in response to requests that the agency update its approval process. “The FDA’s traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and machine learning technologies,” state the FDA plan authors.  

The FDA action plan looks for a commitment from manufacturers as to how the device will perform, with a process for periodic updates as the device is developed. In this way, the FDA and manufacturers would be able to evaluate the software product from premarket development to post-market performance. This would enable the FDA to embrace the power of AI and ML-based software in a medical device, while ensuring patient safety. 

Bakul Patel, director, Digital Health Center of Excellence, FDA

“This action plan outlines the FDA’s next steps towards furthering oversight for AI/ML-based SaMD [Software as a Medical Device],” stated Bakul Patel, director of the Digital Health Center of Excellence for the FDA, when the action plan was released, according to an account in HealthITAnalytics. 

“The plan outlines a holistic approach based on total product lifecycle oversight to further the enormous potential that these technologies have to improve patient care… To stay current and address patient safety and improve access to these promising technologies, we anticipate that this action plan will continue to evolve over time,” he stated. 

Review Finds FDA Medical Device Testing to be Limited So Far 

Researchers publishing in Nature decided to evaluate how well the FDA is addressing issues of test data quality, transparency, bias, and algorithm monitoring in practice, according to the HealthITAnalytics account. The Nature team aggregated 130 AI devices approved by the FDA between January 2015 and December 2020, and assessed how those devices had been evaluated. 

The review showed that 126 of the 130 AI devices underwent only retrospective studies at their submission, that is, information collected about the past. None of the 54 high-risk devices were evaluated by prospective studies that would look at data over time. 

“More prospective studies are needed for full characterization of the impact of the AI decision tool on clinical practice, which is important, because human–computer interaction can deviate substantially from a model’s intended use,” stated the Nature report authors. 

Of the 130 AI devices analyzed, 93 devices did not have publicly reported multi-site assessment as part of the evaluation study, the Nature researchers found. Of the 41 devices with the number of evaluation sites reported, four devices were evaluated in only one site and eight devices were evaluated in only two sites. 

“This suggests that a substantial proportion of approved devices might have been evaluated only at a small number of sites, which often tend to have limited geographic diversity,” the researchers noted. 

Necessary Data to Build AI SaMD Challenging to Assemble  

A recent report from the Pew Charitable Trusts outlined some challenges facing AI SaMD manufacturers seeking FDA approval.  

For example, AI algorithms need to be trained on large, diverse datasets to work effectively across a variety of populations and settings, and to ensure they are not biased. “However, such datasets are often difficult and expensive to assemble because of the fragmented U.S. healthcare system, characterized by multiple payers and unconnected health record systems,” state the Pew report authors.   

An analysis conducted in 2020 of data used to train image-based diagnostics AI systems found that some 70% of the included studies used data from three US states; 34 states were not represented at all. This would risk missing variables such as disease prevalence and socioeconomic differences, the report authors noted.   

“Moreover, assembling sufficiently large patient datasets for AI-enabled programs can raise complex questions about data privacy and the ownership of personal health data,” the Pew authors stated. They noted that some startups developing AI-based programs are using patient data, sometimes without the patients’ consent, ensnaring them in an ongoing debate around consent and around whether patients who do share data should share in the profits.   

In their conclusion, the Pew authors stated, “The FDA is attempting to meet these challenges and develop policies that can enable innovation while protecting public health, but there are many questions that the agency will need to address in order to ensure that this happens.” 

Medical Device Manufacturer Cites Promising Uses for SaMD   

Lorenzo Gutierrez, Microfluidic Manager, StarFish Medical

An experienced medical device manufacturer knows the potential of a breakthrough technology. Whoever, for example, is able to develop an effective early detection tool for lung cancer would be in a good position. “Using AI as an early detection tool has a strong probability of being a game changer,” stated Lorenzo Gutierrez, Microfluidic Manager for StarFish Medical, a medical device design, development and contract manufacturing company based in British Columbia, Canada, in a blog post.  

That outcome seems very possible. According to a 2019 study cited by Gutierrez, a deep learning algorithm achieved a lung cancer detection performance of 94.4% based on 6,716 cases. That result outperformed human radiologists, with an 11% reduction in false positives and a 5% reduction in false negatives.  

In the medical device space, he identified promising target uses for AI SaMD, including: 

Diagnosis of heart diseases. A machine learning algorithm (myocardial-ischemic-injury-index) incorporating age and sex paired with high-sensitivity cardiac troponin I concentrations was used to train an AI platform utilizing data from 3,013 patients. The platform was then tested on 7,998 patients with suspected myocardial infarction. It was found to outperform physicians, with a sensitivity of 82.5% and a specificity of 92.2% (an illustrative sketch of this kind of tabular risk model appears after these examples).  

Detecting retinopathy. Diabetic retinopathy (DR) is one of the leading causes of preventable blindness globally. In a study published by the American Academy of Ophthalmology, a total of 75,137 publicly available fundus images from diabetic patients were used to train and test an artificial intelligence engine to differentiate healthy fundi from those with DR. The results showed an impressive 94% sensitivity and 98% specificity. 

Biosensors for monitoring vital signs. Biosensor-based devices generate huge data sets. AI could be used to predict trends and the probability of disease occurrence. The integration of AI in cardiac monitoring-based biosensors for point of care (POC) diagnostics is a good example. Machine-learning algorithms are used with microchip-based cardiac biosensors for real-time health monitoring and to provide accurate clinical decisions in a timely manner. 
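To make the heart-disease example above more concrete, here is a minimal sketch of that kind of tabular risk model: train a classifier on a handful of features (age, sex, troponin concentration), hold out a test population, and report sensitivity and specificity. The data here is synthetic and the features, weights, and decision threshold are assumptions for illustration only; this is not the published myocardial-ischemic-injury-index algorithm.

```python
# Illustrative sketch of a tabular cardiac-risk classifier (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000

# Synthetic features: age (years), sex (0/1), troponin concentration (ng/L)
age = rng.integers(30, 90, n).astype(float)
sex = rng.integers(0, 2, n).astype(float)
troponin = rng.lognormal(2.0, 1.0, n)

# Synthetic outcome: risk rises with age and troponin (illustration only)
logit = 0.03 * (age - 60) + 1.2 * (np.log(troponin) - 2.0) + 0.2 * sex - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([age, sex, troponin])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = (model.predict_proba(X_te)[:, 1] >= 0.5).astype(int)  # threshold is a tunable choice

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"sensitivity={tp / (tp + fn):.3f}  specificity={tn / (tn + fp):.3f}")
```

In a real submission, the evaluation population would be a held-out prospective or multi-site cohort rather than a random split of the training data, which is exactly the gap the Nature review flagged.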

 

Read the source articles and information in the AI and Machine Learning Software as a Medical Device Action Plan from the FDA, in HealthITAnalytics, from the Pew Charitable Trusts and in a blog post from StarFish Medical. 

Skeptics Leery of the Billions Being Invested in Autonomous Vehicle Software 


While billions are being invested in autonomous driving software companies, some are skeptical about the investments ever paying off. (Credit: Getty Images) 

By AI Trends Staff  

While billions of dollars are being invested in self-driving car software systems, skeptics are saying it’s a bottomless pit and a new approach is needed.  

Estimates of how big the market opportunity is vary widely. Lux Research is estimating the potential opportunity of the self-driving car market to be $87 billion by 2030, according to a recent report from GreyB Services, a technology research company based in India. Another estimate from Allied Market Research sizes the market at $557 billion by 2026.  

Missy Cummings, director, Humans and Autonomy Laboratory, Duke University

More questions are being raised as to whether these are good investments that will pay off. One skeptic is Missy Cummings, the director of the Humans and Autonomy Laboratory at Duke University. In a recent interview in Marketplace Tech, she stated, “You are starting to see all the mergers across the automotive industry where companies are either teaming up with each other or with software companies, because they realize that they just cannot keep hemorrhaging money the way they are. But that pit still has no bottom. And I don’t see this becoming a viable commercial set of operations in terms of self-driving cars for anyone anywhere, ever, until we address this problem.” 

The problem she refers to is the basic approach to autonomous driving software; she does not believe that neural nets or convolutional neural nets are capable of the learning required to ensure safe driving. She describes three camps of developers working on self-driving cars, robotics and AI in general:   

“There’s the camp of people like me who know the reality. We recognize it for what it is, we’ve recognized it for some time, and we know that unless we change fundamentally the way that we’re approaching this problem, it is not solvable with our current approach,” stated Cummings, who has a PhD in systems engineering from the University of Virginia, and is a veteran naval officer and military pilot.   

“There’s another larger group of people who recognize that there are some problems but feel like with enough money and enough time, we can solve it. And then there’s a third group of people that—no matter what you tell them—they believe that we can solve this problem. And you can’t talk them off that platform,” she stated.  

She sees the departure of John Krafcik as CEO of Waymo in April as a sign. 

John Krafcik stepped down as CEO of Waymo in April

Krafcik had been running Waymo since 2015, when it was still a unit of Google known as the Google Self-Driving Car Project, according to an account in Motor Authority. He oversaw Waymo’s transition into a standalone company in 2016 and the launch of the Waymo One self-driving taxi service in Phoenix, Arizona, in 2018. 

While it was not stated this way by Waymo, Cummings saw the Krafcik departure as a type of surrender. “I have been trying to tell people… that we just can’t solve this problem in the way that you think we’re going to. We need to completely clean this sheet and start over.”  

She sees the exits of Uber and Lyft from the self-driving car software business as additional acknowledgements that the investments are far from paying off. In December, Uber sold its self-driving car software unit to startup Aurora, and in April, Lyft sold its self-driving technology unit to Toyota for $550 million, according to an account in Business Insider. 

Over 250 Companies Working on Autonomous Driving 

Meanwhile, the spending and investments continue at a torrid pace. The GreyB report states over 250 companies are working on autonomous driving technology, including automakers, technology providers, services providers, and tech startups.   

When a startup is perceived to have an innovation, the big players look at them to try to get an edge. For example, Amazon acquired the six-year-old startup Zoox in June 2020 for $1.2 billion, eyeing it for use in its logistics network. The founders of Zoox included Tim Kentley-Klay, who developed self-driving technology at Stanford University.   

Argo AI, based in Pittsburgh, is another autonomous driving startup the analysts cited. It was founded in 2016 by Bryan Salesky and Peter Rander, veterans of the automated driving programs at Google and Uber. Its investors include Volkswagen, in for $1 billion, and Ford Motor, which in 2017 had also invested $1 billion over five years and continues its partnership.  

The founders of startup Aurora had a different idea. Founders Chris Urmson, Sterling Anderson, and Drew Bagnell had worked for Google’s Waymo, Tesla’s Autopilot, and Uber’s autonomy projects respectively. Aurora chose to make software and hardware that can be custom-fitted to non-autonomous vehicles to make them driverless. In July 2020, the company also announced plans for an autonomous truck. In a statement, it said it saw the market as having the best economics and level of service requirements that are “most accommodating.”  

Late last year, Aurora acquired Uber’s Advanced Technology Group in a deal that also brought an investment of $400 million into Aurora.   

Motional Has Deal with Eversource to Collect Data  

An autonomous driving startup in Boston has a different approach to generating revenue while waiting for its ship to come in. Motional, a joint venture between Hyundai Motor Group and technology company Aptiv, is running a pilot program with Eversource, New England’s largest utility, according to an account in ModernShipper.  

The program is using Motional vehicles operating within Eversource’s service territory of Massachusetts, New Hampshire, and Connecticut to collect data and information on Eversource’s utility infrastructure and report that data back to the utility. 

“We believed the sensor technology on our vehicles could serve multiple purposes, capture real-time data on energy infrastructure, and ultimately, lead to fewer outages and better service for customers,” Motional stated in a blog posting. 

A Motional spokesperson told Modern Shipper the company’s vehicles are collecting information on electric poles and wires. The insights will be used in the utility’s preventive maintenance program, as well as to monitor ongoing repairs and assess damage from severe weather events.  

“At Eversource, we’re focused every day on innovative solutions to lower costs, enhance reliability and advance clean energy for our customers and communities throughout New England,” stated Jaydeep Deshpande, Eversource program manager for substation analytics. “With Motional, we have one of the leaders in the autonomous vehicle industry right in our backyard. This partnership will be focused on developing future inspection solutions by combining Motional’s state-of-the-art vehicle platform with our in-house machine learning tools.” 

 

Read the source articles and information from GreyB Services, in Marketplace Tech, in Motor Authority, in Business Insider and in ModernShipper.  

AI/ML Applied to Software Testing Improving Speed, Accuracy 


QA engineers incorporating AI to test software applications are entering a new era with its own tools and knowledge base. (Photo by David Travis on Unsplash) 

By John P. Desmond, AI Trends Editor  

Before AI, software testing was a crucial step in the software development life cycle. After AI, it still is. But now AI can help with the testing.  

AI and machine learning are being applied to software testing, defining a new era that makes the testing process faster and more accurate, according to a recent account from AZ Big Media.  

The authors outline benefits of AI applied to software testing as:   

Improved automation testing. Quality assurance engineers spend time performing tests to ensure new code does not destabilize existing, functioning code. As more features and functions are added, more code needs to be tested, potentially overwhelming QA engineers. Manual testing becomes impractical.   

Tools to automate testing can run tests repeatedly over an extended period. The addition of AI functions to these tools is powerful. Machine learning techniques will help the AI testing bots evolve with the changes in the code, learning and adapting to the new functions. When they detect modifications to the code, they can determine whether it is a bug or a new feature. The AI can also detect whether minor bugs can be tested on a case-by-case basis, speeding up the process even more.     

Assistance in API testing, which developers use to evaluate the quality of interactions between different programs communicating with servers, databases, and other components. The testing ensures that requests are processed successfully, that the connection is stable and the user gets the correct output.   

The addition of AI to this process helps to analyze the functionality of connected applications and create test cases. The AI is capable of analyzing large data sets to identify potentially risky areas of the code.  
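As a simplified illustration of what “identifying potentially risky areas of the code” might look like, the sketch below ranks modules by a toy risk score that blends recent churn, historical defect counts, and the number of API endpoints each module backs, so that test-case generation can focus on the riskiest endpoints first. The module names, fields, and weights are invented for illustration and do not describe any particular vendor’s method.

```python
# Toy risk ranking of code modules to prioritize API test generation.
from dataclasses import dataclass

@dataclass
class ModuleStats:
    name: str
    lines_changed_30d: int  # recent churn
    past_defects: int       # defects historically attributed to this module
    endpoints: int          # API endpoints this module backs

def risk_score(m: ModuleStats) -> float:
    # Weights are arbitrary; a real system would learn them from labeled outcomes.
    return 0.6 * m.lines_changed_30d / 100 + 0.3 * m.past_defects + 0.1 * m.endpoints

modules = [
    ModuleStats("payments", lines_changed_30d=420, past_defects=7, endpoints=12),
    ModuleStats("search", lines_changed_30d=80, past_defects=1, endpoints=5),
    ModuleStats("profile", lines_changed_30d=150, past_defects=3, endpoints=8),
]

# Highest-risk modules first; generate or run API test cases in this order.
for m in sorted(modules, key=risk_score, reverse=True):
    print(f"{m.name:10s} risk={risk_score(m):.2f}")
```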

QA Engineers Will Use Different Tools and Expertise to Test AI Apps 

Paul Merrill, principal, Beaufort Fairmont

As AI moves into testing, the tools used by QA engineers to perform testing will change. In an account in TechBeacon, author Paul Merrill relates an anecdote from Jason Arbon, the CEO and founder of test.ai, a company that uses AI to test mobile apps. Arbon also worked at Google and Microsoft as a developer and tester. He co-authored the book How Google Tests Software (2012).   

Arbon tells his kids about the old days when he had a car with manual window cranks, and they laugh. Soon, QA engineers will be laughing at the notion of selecting, managing, and driving systems under test (SUT). “AI will do it faster, better, and cheaper,” Merrill stated. 

test.ai offers bots that explore an application, interact with it, extract screens, elements and paths. It then generates an AI-based model for testing, which crawls the application under test on a schedule determined by the customer. On the site is the statement, “Go Beyond Legacy Software Test Automation Tools.”  

The founders of Applitools, offering a test automation platform powered by what it calls “Visual AI,” describe a test infrastructure that needs to support expected test results from the same data that trains the decision-making AI. “This varies greatly from our current work with systems under test,” stated Merrill, who is a principal at Beaufort Fairmont, software testing consultants based in Cary, N.C.   

Angie Jones, senior director of developer relations, Applitools

He describes the experience of Angie Jones, former senior software engineer in test at Twitter, writing in a 2017 article titled “Test Automation for Machine Learning: An Experience Report.” Jones described how she systematically isolated the learning algorithms of the system from the system itself, isolating the current data in order to expose how the system learns and what it concludes based on the data she gives it. Jones is now senior director of developer relations at Applitools.  

Merrill poses these questions, “Will processes such as these become best practices? Will they be incorporated into methodologies we’ll all be using to test systems?” 

About AI in testing, the cofounders of Applitools, Moshe Milman and Adam Carmi, were quoted by Merrill as stating, “First, we’ll see a trend where humans will have less and less mechanical dirty work to do with implementing, executing, and analyzing test results, but they will be still integral and necessary part of the test process to approve and act on the findings. This can already be seen today in AI-based testing products like Applitools Eyes.”  

About this, Merrill states, “When AI can make less work for a tester and help identify where to test, we’ll have to consider BFF status.”  

Describing the skills needed by AI testers, Milman and Carmi state on the Applitools blog, “Test engineers would need a different set of skills in order to build and maintain AI-based test suites that test AI-based products. The job requirements would include more focus on data science skills, and test engineers would be required to understand some deep learning principles.”  

Four Approaches to AI in Software Testing Outlined 

Four AI-driven test approaches were described by an account entitled AI in Software Testing: 2021, on the site of TestingXperts, a software testing company based in Mechanicsburg, Pa.  

The four approaches are: differential testing, visual testing, declarative testing and self-healing automation.  

In differential testing, QA engineers classify differences and compare application versions over each build. 

Example products supporting this include Launchable, which is based on an ML algorithm that predicts the likelihood of failure for each test based on past runs and on changes to the source code under test. The tool lets the user reorder the test suite so that tests that are likely to fail are run first. One can also choose to run a dynamic subset of tests that are likely to fail, thereby reducing a long-running test suite to a few minutes.  
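Here is a minimal sketch of that predictive test selection idea: fit a model on features derived from past runs, then rank the tests touched by the current change by predicted failure probability so the riskiest tests run first (or only a top slice runs at all). The features, training data, and test names are fabricated for illustration; this is not Launchable’s actual model.

```python
# Sketch of predictive test selection: rank tests by predicted failure risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical runs: [recent_failure_rate, changed_files_near_test, test_age_days]
X_hist = np.array([
    [0.40, 3, 10],
    [0.05, 0, 400],
    [0.20, 1, 90],
    [0.00, 0, 700],
    [0.60, 5, 30],
    [0.10, 2, 200],
])
y_hist = np.array([1, 0, 1, 0, 1, 0])  # 1 = the test failed on that run

model = LogisticRegression().fit(X_hist, y_hist)

# Features for the tests affected by the current commit (hypothetical names)
current = {
    "test_checkout_flow": [0.35, 4, 20],
    "test_login": [0.02, 0, 500],
    "test_search_filters": [0.15, 2, 120],
}
ranked = sorted(current.items(),
                key=lambda kv: model.predict_proba([kv[1]])[0, 1],
                reverse=True)
for name, feats in ranked:
    print(f"{name:22s} p(fail)={model.predict_proba([feats])[0, 1]:.2f}")
```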

In visual testing, engineers test the look and feel of an application by leveraging image-based learning and screen comparisons. Example products incorporating this include the platform from Applitools, with its Visual AI features, including Applitools Eyes, which helps to increase test coverage and reduce maintenance. The Ultrafast Grid is said to help with cross-browser and cross-device testing, speeding up functional and visual testing. The Applitools platform is said to integrate with all modern test frameworks and works with many existing testing tools, including Selenium, Appium and Cypress. 
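At its simplest, visual testing compares a baseline screenshot against a fresh capture and fails the check when too much of the image differs. The sketch below shows that naive version using Pillow and NumPy; commercial “Visual AI” engines are far more tolerant of anti-aliasing and dynamic content, so treat this as an illustration of the concept rather than of any product. The file paths in the usage comment are hypothetical.

```python
# Naive visual regression check: pixel-diff a baseline against a new capture.
import numpy as np
from PIL import Image, ImageChops

def visual_diff(baseline_path: str, current_path: str, tolerance: float = 0.01):
    base = Image.open(baseline_path).convert("RGB")
    curr = Image.open(current_path).convert("RGB").resize(base.size)
    diff = ImageChops.difference(base, curr)
    changed = np.asarray(diff).sum(axis=2) > 30   # per-pixel change threshold
    changed_ratio = changed.mean()                # fraction of pixels that moved
    bbox = diff.getbbox()                         # bounding box of the changed region
    return changed_ratio > tolerance, changed_ratio, bbox

# Usage (hypothetical paths):
# failed, ratio, region = visual_diff("baseline/login.png", "runs/1234/login.png")
# if failed:
#     print(f"{ratio:.1%} of pixels differ in region {region}")
```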

In declarative testing, engineers aim to specify the intent of the test in a natural or domain-specific language, and the system then decides how to perform the test. Example products include Test Suite from UiPath, used to automate a centralized testing process and, through robotic process automation, to help build robots that execute tests. The suite includes tools for testing interfaces, for managing tests and for executing tests.  
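A toy example of the declarative style: the test author states only the intent and its parameters, and a small runner decides how to verify it. This is a generic illustration of the pattern, not how UiPath Test Suite or any other product implements declarative testing; the “user can log in” intent and its checker are hypothetical.

```python
# Toy declarative test runner: tests declare intent, the runner decides the "how".
from typing import Callable

INTENTS: dict[str, Callable[[dict], bool]] = {}

def intent(name: str):
    def register(fn):
        INTENTS[name] = fn
        return fn
    return register

@intent("user can log in")
def check_login(params: dict) -> bool:
    # Placeholder: a real runner would drive the UI or call the auth API here.
    return params.get("username") == "demo" and params.get("password") == "secret"

# The tests declare only *what* should hold, not *how* to verify it.
declarative_tests = [
    {"intent": "user can log in", "params": {"username": "demo", "password": "secret"}},
]

for t in declarative_tests:
    ok = INTENTS[t["intent"]](t["params"])
    print(f"{t['intent']!r}: {'PASS' if ok else 'FAIL'}")
```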

Also, tools from Tricentis aim to allow Agile and DevOps teams to achieve their test automation goals, with features including end-to-end testing of software applications. The tool encompasses test case design, test automation, and test data design, generation and analytics.  

In self-healing automation, the element selectors used in tests are automatically adjusted to changes in the UI. Example products include mabl, a test automation platform built for continuous integration and continuous deployment (CI/CD). mabl crawls the app screens and runs default tests common for most applications; it uses ML algorithms to improve test execution and defect detection.  
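The self-healing idea can be sketched as a fallback chain: when the primary element locator no longer matches after a UI change, try alternative attributes and note which one healed the lookup. The simulated DOM and locators below are hypothetical, and real products typically use learned models rather than a fixed fallback list.

```python
# Sketch of a self-healing element lookup with a simple fallback chain.
from __future__ import annotations

def find_element(dom: list[dict], locators: list[tuple[str, str]]) -> dict | None:
    """Try each (attribute, value) locator in order; return the first match."""
    for attr, value in locators:
        for el in dom:
            if el.get(attr) == value:
                if (attr, value) != locators[0]:
                    print(f"healed: primary locator failed, matched via {attr}={value!r}")
                return el
    return None

# Simulated DOM after a UI change renamed the button's id but kept its text.
dom = [{"id": "submit-btn-v2", "text": "Submit", "css": "btn.primary"}]

button = find_element(dom, [("id", "submit-btn"), ("text", "Submit"), ("css", "btn.primary")])
assert button is not None  # the test keeps running instead of failing on the renamed id
```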

 

Read the source articles and information from AZ Big Media, in TechBeacon, in “Test Automation for Machine Learning: An Experience Report” from Angie Jones, on the Applitools blog and from AI in Software Testing: 2021 on the site of TestingXperts.

The Key Regulatory Hurdles Facing AI Autonomous Cars 


Some 30 states have specific regulations on the books about self-driving cars; most of the other states are grappling with draft provisions.  (Credit: Getty Images) 

By Lance Eliot, the AI Trends Insider   

You might be surprised to learn that there is a regulatory enigma of sorts that surrounds and engulfs the advent of AI-based self-driving cars.   

How so?   

Nobody is quite sure what to make of the beast, as it were, in terms of what size and shape of regulatory rules and legal oversight is best warranted for the emergence of self-driving cars.   

Some emphatically insist that we should live and let live, meaning that the marketplace should decide how, where, when, why, and other crucial elements about the deployment of self-driving cars. Others exhort that this is a high-tech innovation that abundantly needs and altogether requires demonstrative legislative controls and strict guidance (it is, after all, akin to a computer on wheels).   

Some reside at nearly polar opposites of what should be done. 

In addition, some sit somewhere in between and are trying to find some agreeable middle ground. One thing that most everyone acknowledges is that self-driving cars are important, and they are especially notable since they involve clear-cut life-or-death kinds of considerations.   

Cars by their very nature are intertwined with life and death.   

We ought to acknowledge that right upfront. We make use of cars on our public roadways. They are an essential form of transportation. Plus, our society has culturally ingrained itself into cars as more than simply a transit option. Car driving is depicted in our movies and TV shows. People discuss cars and the driving of cars as regularly and routinely as they discuss the clothes they wear or what kind of food they eat.   

Teenagers see the driving of a car as a key step toward adulthood and bask in the freedom they can attain by being able to drive a car. For teens, possessing a driver’s license is a huge source of personal identity and marks their rite of passage into the world as independent beings. There has been a tad bit of waning of this heretofore monumental life milestone as some newbie drivers decide to wait to drive, but the preponderance is still in effect.   

Adults are likewise culturally wrapped into the semblance of cars, either owning cars or riding in cars. Though there are many efforts to bolster mass transit, there is nonetheless a kind of sticky loyalty to cars, which are conveniently able to get you from point A to point B, directly so. That’s a tough advantage to undercut or to argue is unworthy.   

The costs associated with cars are of course tremendous. Not just the cost of buying a car or maintaining a car. Via our cars, in the United States alone, there are about 40,000 annual car crash-related fatalities and about 2.5 million associated injuries. This is how the catchphrase of life-or-death enters into the equation.   

Each time that you get behind the wheel of a car, you are about to enter into a driving journey that might end badly. We don’t think about driving in those kinds of terms. It would be perhaps overly dreadful. The base assumption is that you will presumably make your driving trek safely, doing so entirely absent of getting struck by another car, and likewise avoiding plowing into a nearby car. See my columns for the additional and somewhat somber stats that describe the chances of that kind of scratch-free driving voyage.   

Okay, cars are readily at the top of mind. That seems apparent. Self-driving cars are gradually getting to that same pinnacle, though they are more so a curiosity right now than a real-world consideration per se by most. You see ongoing news stories about self-driving cars. If you perchance live in one of a select few places in the country, you can likely witness self-driving cars going down the street.   

All in all, realistically, you probably have much greater odds of encountering a horse or a zebra face-to-face than you do a self-driving car right now. 

That’s not to suggest that anticipating the advent of self-driving cars should be something on the back burner. Heck no. There are occasions when an innovation has been unleashed and there was insufficient preparation involved, allowing rather untoward results that could have been prevented beforehand. This is the proverbial problem of waiting until the horse is out of the barn that we always should be mindful of.   

Counterbalancing the desire to make sure that all is readied would be the qualm that the preparations themselves could put a crimp on progress. If an innovation isn’t given sufficient room to breathe, it could either never see the light of day or at least might be summarily delayed. And, in the case of self-driving cars, since the assumption portends that self-driving cars will likely reduce the annual number of car crash fatalities, each day of potential undue delay is a costly price to pay.   

Notice that the word “undue” was intentionally utilized. It is easy to claim that any delay is too much. The rub is that there is possibly a Goldilocks level of delay. This means that there is a likely tradeoff between potentially enabling or stoking car crash fatalities versus reducing or eliminating car crash fatalities (see my columns for in-depth discussions on this). 

We return to the matter at hand, which is what about the regulations as they might pertain to self-driving cars. 

There are plenty of regulations about cars; there are plenty of regulations about car drivers. 

You might certainly be tempted to assume that we must already have more than enough regulations to cover the matter of cars that drive. This just seems to make sense on the face of things.   

Where the confusion enters into this picture is the notion that self-driving cars are a two-in-one combo deal. That’s not how regulations have been previously devised.   

You see, a true self-driving car is not just a car. It is also in a sense a driver. You have a car that drives itself. As such, we are apt to find that existing regulations about cars are not on par with such a seemingly oddball creation. 

Regulations about cars as strictly being an automobile, namely a thing that can carry us around, do not particularly veer into the realm of being a car driver. Regulations about car driving do not particularly roam in the realm of what a car or automobile is or must provide.

Self-driving cars have brought together those two otherwise somewhat disparate topics (they are absolutely related topics, but so far treated ostensibly separately, in a manner of speaking). This ends up becoming a kind of loophole in existing regulations. As mentioned earlier, some are adamant that we must close those loopholes immediately. Others say that we ought to wait and see. 

The added twist is that the federal government aims at dealing with the aspects of the car, while the states tend to focus on the car drivers.   

If the states were to branch further out from their attention to encompass the nature of cars themselves, going ergo beyond the role of car drivers, they would be on a slippery slope of usurping the federal regulations, it would seem. Meanwhile, you could similarly contend that if the federal government opts to dive into the particulars about car drivers, this would seem to stray into regulatory territory typically reserved for the states.   

Yes, believe it or not, the emergence of self-driving cars brings forth a conundrum of the classic push and pull of federal rights versus states’ rights. Kind of shocking to think that simply automating the driving task would get into such a thorny patch, but it most certainly does.   

Here is an intriguing question that is worth pondering: What are the top four Congressional issues when it comes to the advent of AI-based true self-driving cars?   

I’m limiting this discussion to just the four most prominent issues. There are literally dozens of issues that could be debated, and the resulting dialogue undoubtedly would go far beyond the space limits for this piece.   

You might quibble with the four that I have chosen for discussion. I’ll explain in a moment how I boiled down the whole kit and caboodle into just four of the major issues. And, for those of you interested in further knowing about those other issues not addressed herein, I’ve covered or will be covering the myriad of relevant issues in my ongoing columns.   

Before jumping into the details, I’d like to clarify what is meant when referring to true self-driving cars. 

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems). 

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.   

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.   

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Self-Driving Cars And Congressional Top Issues   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers. The AI is doing the driving.   

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. 

Why this added emphasis about the AI not being sentient?   

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.   

Let’s dive into the myriad of aspects that come to play on this topic.   

I had earlier indicated that the Congressional issues underlying the advent of self-driving cars would be culled down into four key major aspects. The four do not represent the entirety of the topics confounding and confronting the many questions we have yet to resolve about self-driving cars. But these four are certainly paramount and are plentifully worthwhile for devoted attention. 

The four aspects are: 

  • Feds v. States: Congressional regulatory scope versus the states when it comes to self-driving cars 
  • Exemptions Latitude: Deciding how far existing regulatory provisions should be stretched for self-driving car testing on public roadways 
  • Cybersecurity Requirements: To what degree should regulations mandate cybersecurity of self-driving cars 
  • Roving Eye Data Rights: Figuring out the regulatory landscape encompassing stakeholder data rights of the self-driving car roving-eye   

I’ll briefly share with you the nature of the regulatory challenges underpinning each of the four aspects.   

Hopefully, doing so will inspire you to become further interested in and possibly get directly involved in trying to resolve these controversial issues. While discussing the four areas, I will be quoting from time to time from a handy U.S. Congressional Research Service (CRS) report, labeled R45985 and entitled “Issues in Autonomous Vehicle Testing and Deployment,” written by Bill Canis; it serves to highlight the wrangling involved in these matters.   

You might also find noteworthy that the nearest that Congress has come to potentially establishing regulations specifically about self-driving cars has been the House of Representatives H.R. 3388, known as the SELF DRIVE Act, dated September 2017, and a separate effort by the Senate, S. 1885, known as the AV START Act, dated November 2017 (see my columns for coverage). Neither of those was passed into law. There have been other Congressional efforts since then that have either sought to resuscitate those proposed regulations or that have somewhat tried to go a different route, but such efforts have not been (shall we say) as pronounced as the H.R. 3388 and the S. 1885.   

In terms of regulations by the states, some 30 states now have specific regulations on the books about self-driving cars. Most of the other states are either in the midst of grappling with draft regulatory provisions or have had strident discussions about what might be warranted. See my columns for coverage.   

Okay, now we can leap into the fray and examine each of the four major aspects. Please fasten your seatbelt.   

Feds v. States 

Issue: Congressional regulatory scope versus the states when it comes to self-driving cars 

I already tipped my hand about this topic. As earlier stated, one of the most vexing questions is who is the boss. Traditionally, Congress and the federal government focus on the nature of the vehicle. The states tend to handle the driver and elements entailing driver registration and suitability for being behind the wheel. 

A self-driving car is an all-in-one combination: It is a car. And it is a driver.   

Some would suggest that this means that you cannot readily have two bosses anymore. One boss needs to oversee both the car and the driver. It would be hard to cope with continuing to allow the states to deal with the AI driving system while the feds stay occupied with just the car. You can’t split the baby like that (a perhaps crude but illustrative saying).   

One obvious answer is that you might anoint the federal government to be solely responsible for regulating self-driving cars. 

This is said to make sense due to the notion that there would be one overarching scheme that would encompass the entire country. Without that kind of universal role, you would seemingly have a Byzantine series of state-by-state differences. That could be a potential nightmare for the automakers and self-driving tech firms, having to make sure that their self-driving cars are able to fulfill a potential muddled patchwork of idiosyncratic state-derived laws.   

Imagine that a self-driving car comes up to the border between one state and another state, coming to a screeching halt because it is not considered suitable under the regulations of that other state. Passengers would not be enthralled. A farfetched scenario is that you would have the self-driving car stop at the hub set up at the border of the two states. You would get out of one self-driving car and get into a different self-driving car that was properly approved for use in that other state.   

So much for interstate commerce.   

This all seems to point out that Congress does have the primary seat in all of this. Ouch, those are fighting words, exclaim the states.   

The states would argue vehemently that they ought to be able to decide what happens regarding self-driving cars within their state borders. Congress might not be as tuned to the needs of each specific state and therefore provide a bland generic regulatory approach that overrides the desires and concerns of specific states. Our country works satisfactorily now with the states deciding on the car driver elements, and there is no reason to assume that they cannot handle the same when it comes to self-driving cars.   

Round and round we go.   

Here’s a quick summary of the debate: “The extent to which Congress should alter the traditional division of vehicle regulation, with the federal government being responsible for vehicle safety and states for driver-related aspects such as licensing and registration, as the roles of driver and vehicle merge” (per CRS R45985).   

Exemptions Latitude 

Issue: Deciding how far existing regulatory provisions should be stretched for self-driving car testing on public roadways 

Based on existing regulations, the spate of self-driving cars rolling around on public roadways will likely need to seek various exemptions from prevailing federal standards. Those federal standards were crafted based on the assumption that there would be a human driver at the steering wheel and operating the driving controls. 

One anticipated overhaul of car design will be that self-driving cars will not have any human-accessible driving controls. Out goes the steering wheel. Out go the pedals. They aren’t needed. This frees up the interior space of a car. Numerous car designs have been proposed that would make use of this newly freed up roominess.   

Perhaps there will be swivel chairs in self-driving cars, allowing the passengers to turn this way and that way during a driving journey. They don’t need to always be facing forward anymore. We might also find ourselves taking naps or getting an entire snooze while riding in a self-driving car. In that case, the seats might be reclinable or be replaced with beds of some kind.   

Anyway, cars today are supposed to have driving controls. Furthermore, the driving controls must meet distinctly specified guidelines. That’s why you can pretty much go up to any car, get in, and instantly start to drive it. Without those guidelines, presumably, each car would have some proprietary form of driving controls and you would expend a lot of concentration on just trying to master how to drive the darned thing.   

You might be thinking that the feds ought to go ahead and provide as many exemptions as are requested. Any holdup on exemptions would delay those self-driving car testing efforts. Do not let paperwork get in the way of progress, some urge.   

Well, that is a pretty risky way to do things, the counterargument goes. You are letting these unproven self-driving cars zip around on our public roadways. The willy-nilly approval for those exemptions is bound to mean that unsound self-driving cars are cruising next to you on your local highways and byways. 

Do you really want to take a chance like this?  

They would fervently caution that we might be opening Pandora’s box. Have the self-driving car get fully tested on proving grounds and closed tracks, long before being placed onto our streets and given exemptions. Likewise, compel the automakers and self-driving tech firms to first showcase via computer-based simulations that their self-driving cars are ready and able to handle the harsh and dangerous real-world of public roadway driving.   

The counter to that counterargument is that we won’t see the attainment of self-driving cars for a very long time if those exemptions aren’t allowed.   

As an aside, and as steadfast readers know, I have repeatedly warned that we are going to have some attempts at trying to use the exemption granting as a kind of boogieman, seeking to make the feds into a seeming bureaucratic boondoggle that is ostensibly preventing progress. There will be cases in which this indeed might be true, but there are going to be other cases where the prevention is worthy and yet will be cast as though it is untoward.   

Anyway, round and round we go.   

Here’s a quick summary of the debate: “The number of autonomous vehicles that NHTSA should permit to be tested on highways by granting exemptions to federal safety standards, and which specific safety standards, such as those requiring steering wheels and brake pedals, can be relaxed to permit thorough testing” (per CRS R45985). 

Cybersecurity Requirements   

Issue: To what degree should regulations mandate cybersecurity of self-driving cars   

The computer inside your laptop is a target for cybercrooks. You already know that. Self-driving cars are computers on wheels. There are a lot of computers onboard. The obvious computer is the one that runs the AI driving system (to clarify, there are likely numerous onboard computers devoted to the driving task). In addition, there are typically going to be fifty to a hundred other microprocessors involved in the numerous vehicular ECU (electronic control unit) systems and subsystems. This entails the computers that keep the engine running, computers that control the brakes, computers that deal with the airbags, and so on. 

All of this makes the car go. Sadly, all of this also makes the car a really tempting target for cyber hacking.   

The cybersecurity “threat surface” of a self-driving car is immense. Lots and lots of ways can be tried to break into a self-driving car by an evildoer. I’ve previously and extensively covered the cybersecurity aspects of self-driving cars. The opportunities for doing really bad things are rife, and the whole topic will make the hairs on the back of your neck stand up.   

The crucial question in this discussion focuses on how much regulatory action there should be about the cybersecurity provisions of self-driving cars. We can also toss into the mix the need to ascertain the federal attention to the matter and how that fits into, conflicts with, or complements what the states want to do on this hefty matter.   

Round and round we go. 

Here’s a quick summary of the debate: “How much detail legislation should contain related to addressing cybersecurity threats, including whether federal standards should require vehicle technology that could report and stop hacking of critical vehicle software and how much information car buyers should be given about these issues” (per CRS R45985).   

Roving Eye Data Rights   

Issue: Figuring out the regulatory landscape encompassing stakeholder data rights of the self-driving car roving-eye   

Self-driving cars will have a communications device that allows for making a connection to a cloud computing platform that is presumably maintained by a fleet operator or the automaker or similar. This capability is usually referred to as OTA (Over-The-Air) electronic communications. 

Patches and various software updates for the AI driving system can readily be downloaded and installed via the OTA. This allows for rapid changes while self-driving cars are in the field and avoids having to bring the vehicle to a local dealer or repair shop to merely make system changes.   

The OTA will also provide a means to upload data from the self-driving car. Self-driving cars will be collecting gobs of data via their onboard sensor suite. This data can be pushed up into the cloud. Some uses will be to improve the AI driving system capabilities by analyzing driving journeys. There is also the likely monetization that will be quite profitable for those operating a fleet. I’ve referred to this as the “roving eye” and indicated that though it has some goodness possibilities, it also portends worries about privacy intrusion and like concerns.   

The data collected by a self-driving car could reveal where you went, how long you stayed at your destination, etc. Inward-facing video cameras could capture your entire time while inside the self-driving car. The outward-facing video cameras would capture you walking up to the self-driving car and walking away from it. Whenever a self-driving car goes down a neighborhood street, it will be collecting data that showcases the people standing in front of their homes, walking their dogs, playing ball with their kids.   

I ask you this, who owns or can decide what happens with that captured data? 

That is assuredly a whopping big can of worms.   

You might argue that the passenger ought to decide what happens with the data. There is an equally powerful argument that the owner of the self-driving car ought to make that decision. Perhaps the fleet operator should be able to decide. 

Round and round we go. 

Here’s a quick summary of the debate: “The extent to which vehicle owners, operators, manufacturers, insurers, and other parties have access to data that is generated by autonomous vehicles, and the rights of various parties to sell vehicle-related data to others” (per CRS R45985).   

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion   

You are now primed and ready to engage in discussions and debates on these meaty topics. What are the right answers, you might be mulling? That’s up to all of us to decide. Start your engines and get engaged in the self-driving cars regulatory enigma. 

Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website 

 

AI/ML Seen as Crucial to Battle Software Supply Chain Security Breaches  


Multiple modes of transport—planes, trains and automobiles—are needed to execute today’s complex supply chain, which also is a target of cyber criminals. (Credit: Getty Images) 

By John P. Desmond, AI Trends Editor  

The digital supply chain has been disrupted over the past year not only for COVID-19-related reasons, but also from cyberattacks involving ransomware, causing security professionals to explore the potential for AI and machine learning to further automate monitoring of the risk.  

The Kaseya ransomware attack in July, called a software supply chain security breach by some observers, was similar to the SolarWinds attack in the spring of 2020, in that malicious software was delivered to customers via an automatic software update.  

But the Kaseya attack was different in that the perpetrators made a specific ransomware demand, first for $45,000 from each company affected, then for $70 million to unlock all the affected systems. The SolarWinds hackers gained access to the affected systems, where they were able to roam for months, with unknown intentions.   

Kaseya is a managed service provider, whose customers use it to help manage their IT infrastructure. Kaseya can deploy software to its systems under management, in a way roughly equivalent to software suppliers issuing automatic updates.  

A criminal gang named REvil, reportedly based in Russia, hacked into the Kaseya system and pushed the REvil software to all the systems under its management, according to a recent account in Lawfare entitled, “Why the Kaseya Ransomware Attack is a Really Big Deal.”  

The standard response procedure following deployment of malware in a zero-day exploit—in which the attacking malware has never before been seen—has been that security professionals, often from the affected software supplier, produce a patch, usually within a few days. That patch is then installed to remediate the threat.  

Matt Tait, chief operating officer, Corellium

The Kaseya attack requires a different type of response. “Malware deployed automatically via the supply chain upends all of these dynamics pathologically,” stated Matt Tait, author of the Lawfare account, and the current chief operating officer of Corellium. Tait has worked at Google on Project Zero, to find zero-day vulnerabilities, and at the British spy agency GCHQ. Corellium offers products to help developers work with Advanced RISC Machine (ARM) processors, used in smartphones and many other consumer electronic devices.    

“A malware operator with access to an automatic software delivery infrastructure has no incentive to keep the infections small,” Tait stated. Instead, rather than infecting a few targets at the top of its priority list, the digital software supply chain hacker can hit all the affected customers nearly simultaneously.  

Moreover, “Vendors can’t respond in the normal way to supply chain malware either,” Tait states, because the malware came from their own software delivery system. To remediate, they need to disable the infrastructure to prevent further misuse, and then work on securing their own systems. “Patches are the wrong tool for remediation” in this case, Tait suggests, since, “Patches help defend systems that might be vulnerable to malware, but here customers are already infected with the malware. By the time the breach is discovered, it’s already too late to fix via a patch.” 

Working toward solutions will be challenging. “Tackling this problem is no small task; it will need a great deal of resources and creativity across many different domains, from the technical community through to the foreign policy community,” stated Tait.  

Investors Back Interos with Another $100M to Help Manage Supply Chain Risk 

The investment community is taking notice, with a recent $100 million investment in Interos, a company offering supply chain risk management software incorporating AI and machine learning.  

Jennifer Bisceglie, CEO of Interos

“COVID-19 and other macro and digital supply chain disruptions over the past year have caused boards of directors and other leaders to awaken to the tremendous impact supply chain disruptions can have on operational resilience, business performance and reputation,” stated Jennifer Bisceglie, CEO of Interos, in a press release. “Manual and annual supply chain risk monitoring is urgently moving to automated and continuous, and that can only be accomplished through AI/ML-based technology. This funding will allow us to accelerate our mission of helping organizations fix supply chain issues before they cause operational disruption.”  

Interos aims to have its software serve as an early warning system to identify developing disruptions and supplier problems in real time. Founded in 2005, Interos has software in use by Fortune 500 brands, the US Department of Defense and NASA. The tools enable customers to map their global supply chains in multiple tiers and then continually monitor their suppliers.   

The Interos platform monitors for both physical and digital supply chain issues across dozens of risk categories, including financial, operational, governance, geographic, and cyber factors. The platform also monitors environmental, social, and governance (ESG)-related risk factors, such as unethical labor practices and greenhouse gas emissions.   

The approach seems appropriate given the widespread impact of the Kaseya attack, which resulted in the shutdown of 800 supermarket locations that could not operate their checkout software, interrupted Swedish rail service, and disrupted the operations of a Swedish pharmacy chain, according to an account on the Interos blog.  

McKinsey Sees AI As Needed to Help Manage Supply Chain  

That AI is a fit for managing more complex supply chains is also asserted in a recent report from McKinsey entitled, “Succeeding in the AI Supply Chain Revolution,” which describes longer and more interlinked physical flows, market volatility exacerbated by the COVID-19 pandemic and a focus on more supply chain resilience.  

“Supply-chain management solutions based on artificial intelligence (AI) are expected to be potent instruments to help organizations tackle these challenges,” state the authors, led by Knut Alicke, a partner in McKinsey’s office in Stuttgart, Germany. “AI’s ability to analyze huge volumes of data, understand relationships, provide visibility into operations, and support better decision making makes AI a potential game changer,” he stated.  

Read the source articles and information in Lawfare, in a press release from Interos, in an account on the Interos blog, and in the McKinsey report entitled, "Succeeding in the AI Supply Chain Revolution." 

Alphabet Workers Union Giving Structure to Activism at Google 

Software engineer Alberta Devor stated that he joined the union "to ensure Google and Alphabet follow the slogan that our executives have actually abandoned: don't be evil."

A number of Alphabet Workers Union members have posted remarks on the AWU site about why they joined the union and what they hope will come of it.

The Alphabet Workers Union is a minority union, representing a fraction of the company's more than 260,000 full-time employees and contractors. The workers stated at the outset that it was primarily an effort to provide structure for activism at Google, rather than to bargain for a contract.

"So among the things that you're seeing in a place like Pittsburgh is a revival of interest in labor organizing as a way to regain some of the equity in income distribution that used to be associated with steel," stated Mike Madison, a University of Pittsburgh law professor.

The union is affiliated with the Communications Workers of America (CWA), which represents workers in telecommunications and media in the US and Canada.
Sara Steffens, secretary-treasurer, Communications Workers of America
"There are those who would want you to believe that organizing in the tech industry is impossible," stated Sara Steffens, CWA's secretary-treasurer, in an account in The New York Times. "If you don't have unions in the tech industry, what does that mean for our country? That's one element, from CWA's perspective, of why we see this as a priority."

Why Workers Joined the Union

The minority union structure gives the union the ability to include Google contractors, who outnumber full-time employees by some 121,000 to 102,000, according to a recent New York Times account.

Software engineer Greg Edelston stated that he joined the union because, "I would like to see Alphabet act as ethically as possible. The union provides a way to influence Alphabet's culture in the name of principles."

The union has begun taking public positions. For example, an AWU statement issued in January about Google's suspension of corporate access for Margaret Mitchell, then a senior researcher and head of its Ethical AI team, stated in part, "together these are an attack on the people who are trying to make Google's technology more ethical."

From 1875 until the last steel plant in Pittsburgh closed in 1984, local unions had a great deal of sway and helped workers win high wages. Since then, union membership has fallen significantly, and Madison sees that workers have lost power in today's service economy.

In the fall of 2019, some 65 Google contract workers in Pittsburgh voted to form a union, seeking better pay and some benefits. The members had been working for a Google contractor, HCL, on the Google Shopping platform.

More recently, the National Labor Relations Board stated that HCL imposed stricter workplace rules after the union vote, left positions unfilled in Pittsburgh, and moved some work to Krakow, Poland. In June, the NLRB issued an amended complaint asking that HCL be ordered to bring back the work sent abroad, according to a news release from the USW, which represents 850,000 workers in a range of industries including technology and services.

The HCL workers may have little choice. "There are no tight time frames within the law as it exists now, so workers can end up bargaining for many years," stated Celine McNicholas, director of government affairs at the Economic Policy Institute, in an account from 90.5 WESA in Pittsburgh.


[Ed. Note: The slogan "don't be evil" has been part of Google's corporate code of conduct since 2000. When Google was reorganized under the Alphabet parent company in 2015, the phrase was adjusted to "do the right thing," according to an account in Gizmodo.]
Parul Koul, software engineer and executive chair, Alphabet Workers Union
Parul Koul, a software engineer at Google, is the executive chair of the union. Interviewed by Bloomberg Businessweek in January, she stated, "The union is going to be a hugely important tool to bridge the divide between well-paid tech workers and contractors who do not make as much."

In a highly unusual move for the technology industry, the Alphabet Workers Union was formed by over 400 Google engineers and other workers in early January. The union now has about 800 members.

The Google union is seen by Veena Dubal, a law professor at the University of California, Hastings College of the Law, as an important experiment in bringing unionization to a major tech company. "If it grows … it could have huge impacts not just for the workers but for the broader issues that we are all thinking about in terms of tech power in society," she stated.


Data analyst Gabrielle Norton-Moore stated she and other HCL workers voted for a union in the fall of 2019 because they were being treated unfairly.

The Alphabet Workers Union, formed in January by Google engineers to give structure to activism, is establishing its voice. (Photo by hk on Unsplash)
By AI Trends Staff


Since the HCL contractors voted to affiliate with the United Steelworkers (USW), very little has happened. Union organizer Ben Gwin stated HCL has been slow-walking the contract negotiations. "They're not willing to meet more than two times a month. It's insulting," Gwin stated. "And it just seems like a total joke that they're not taking the process seriously."

The AWU issues press releases on matters of interest, such as the January statement on the Mitchell suspension noted above.

Pittsburgh Workers for Google Contractor HCL Voted to Unionize in 2019

Read the source articles and information in The New York Times, from 90.5 WESA in Pittsburgh, and in a news release from the United Steelworkers.

Asked about the union's statement of objectives, which reads, "We are responsible for the technology that we bring into the world," Koul stated, "That statement means acting in solidarity with the rest of the world … I think we need to see ourselves as part of the working class. Otherwise, we're going to end up being rich people simply protecting our own advancement."

For its part, Google management calls attention to its efforts with AI for Good and racial equity. In a June post, Melonie Parker, Google's chief diversity officer, stated the company is committed to doubling the number of Black workers by 2025. She also mentioned a student loan repayment program to help Black employees, and partnerships with Historically Black Colleges and Universities (HBCUs) to widen access to college and opportunities in tech. She announced that 10 HBCUs will each receive an unrestricted financial grant of $5 million.

A search of the Google blog found no mention of the Alphabet Workers Union.

"As companies come in and see Pittsburgh as a place to get a highly educated workforce at a low cost, it opens the door to exploitation," said Mariana Padias, the USW's assistant director of organizing. An increasing number of tech workers view "union democracy" as essential to having "more of a say in their workplace," she stated.

The USW has sought to unionize more tech workers through its Federation of Tech Workers.

Talking Robot Boxes at Norwegian Hospital a Hit with Sick Kids 

In a new study, researchers from the Norwegian University of Science and Technology (NTNU) analyzed how the robots came to be viewed as friendly, animal-like creatures and why that matters.
Roger A. Søraa, researcher, Norwegian University of Science and Technology (NTNU)
"We found that these robots, which were not designed to be social robots, were actually given social qualities by the humans interacting with them," stated Roger A. Søraa, a researcher at NTNU's Department of Interdisciplinary Studies of Culture and Department of Neuromedicine and Movement Science, and first author of the new study. "We tend to anthropomorphize technologies like robots, giving them humanlike personalities, so we can put them into a context that we're more comfortable with."

The robot WALL-E in the animated sci-fi film of that name was a waste collector with a personality, which helped audiences relate to him. Today, talking work robots are endearing themselves to people. (Credit: Getty Images)
By John P. Desmond, AI Trends Editor

Sarah Gray, the Trust's assistant general manager for facilities, stated Ella "put a smile on the faces of some of our youngest patients" with her routine.

LionsBot Founder Explains Its Smart Robot Features

These motorized units, essentially boxes on wheels, are assigned to carry trash, medical equipment, or food from one part of the hospital to another. But since they need to interact with humans, such as by warning them to get out of the way, they have to talk.

LionsBot Cleaning Robots Tell Jokes

LionsBot is using AI within its LionsOS software, which it calls Active AI, for handling application, mapping, scheduling, and tracking.

LionsBot, a Singapore-based robotics company that makes cleaning robots for commercial, industrial, and public spaces, has imbued them with a sense of humor.

But rather than using a generic Norwegian voice, the hospital robot developers chose to give the robots a voice that uses the strong, distinctive local dialect, according to an account in the International Journal of Human-Computer Studies.

"These are the types of tasks that can typically be dull, dirty, or dangerous, or what we call 3-D jobs," Søraa stated. "And those are the jobs we are seeing becoming robotized or digitalized the fastest."

"We found that using the local dialect really gave the robots more of a personality," he stated. "And people often like to give non-living things human qualities to fit them within existing social frameworks."

St. Olav's Hospital chose in 2006 to purchase 21 automated guided vehicles (AGVs) from Swisslog Healthcare to do transport work, such as carrying food from the snack bar to different hospital units, or clean linens to nursing stations. St. Olav's was the first hospital in Scandinavia to embrace the technology.

When the talking robot encounters the talking elevator, things get interesting. The AGV is programmed to take over the elevator, politely asking people to "please use another elevator" when it is onboard.

Ella's delivery may sound robotic, but this "riotous roomba" had social media in stitches. "My 7 year old laughed his head off at that joke! Great work, Ella!" wrote one fan on Facebook, the Post reported. Others compared the joke-bot to Eve, the sweetheart of the titular character from "WALL-E."

"The Earth's rotation really makes my day," quips the machine. In another clip, the cleaning machine Ella asks a child, "How do trees access the internet?"

The "automated guided vehicles" at St. Olav's Hospital in Trondheim, Norway, have personalities. The boxes talk.

Children being treated in the wards began to play games with the robots, seeking them out and greeting them. One parent with a gravely ill child found solace in the robots' endless, somewhat mindless battles as they unsuccessfully ordered inanimate objects, like walls, to get out of the way.

AI Trends sent several questions to the LionsBot team. Here are responses from Dylan Ng Terntzer, CEO and founder of LionsBot:

A video demonstration of the "janitorial joke android" was recently posted to Facebook by the UK's Maidstone and Tunbridge Wells NHS Trust, which plans to lease two of the LionsBot cleaners for its pediatric ward in Kent, England, according to an account in The New York Post. In the clip, a cleaning bot named Ella is seen telling jokes to sick children during a trial run.

What is the story behind the robot's sense of humour?

LionsBot seeks to develop smart robotics solutions that empower cleaning professionals and allow them to focus on higher-level tasks. Given that our robots are regularly deployed to public areas where foot traffic is high, such as healthcare facilities, shopping malls, museums, hotels, and universities, the team wanted to present the robots in a friendly, non-threatening way to residents and visitors of the space.

The inclusion of humour and personality has additional benefits, including improving the image of the institution where the robots are deployed, along with offering a unique and enjoyable way for kids to learn and be inspired about technology.

Who is programming it?

LionsBot's technology is built and developed in-house at our headquarters in Singapore. In terms of the code, all elements of the robots' personalities are written within LionsOS, the cloud operating system that serves as the backbone of each of our robots.

Passersby can engage with a LionsBot cleaning robot by scanning its QR code, which allows them to ask the machine questions like "What is your name?" or "What type of cleaning do you perform?". LionsBot's robots can also sing, rap, wink, and even crack jokes at the tap of a button.

Because the cleaning robots operate in offices and places with a high density of people, we do not put microphones in our robots, to avoid any unintended recording of private conversations. Thus, the robot will not understand what the user is saying or respond to speech. The robot instead reacts to stimuli (e.g., pressing of the heart or blocking of the robot) or commands.

Young kids can press the robot's heart to get a random reaction from it. If you block it during cleaning mode, the robot will likewise politely ask you to move. The cleaner can also activate the personality through the app.
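As an illustration of this kind of stimulus-driven interaction (a minimal sketch only, not LionsBot's actual code; the event names and canned responses are hypothetical):

    import random

    # Hypothetical catalogue of canned responses, keyed by stimulus type.
    # A real robot OS such as LionsOS would presumably manage these centrally.
    RESPONSES = {
        "heart_pressed": [
            "The Earth's rotation really makes my day!",
            "I love keeping this place sparkling clean.",
        ],
        "path_blocked": ["Excuse me, could you please let me through?"],
        "app_trigger": ["Hello! Would you like to hear a joke?"],
    }

    def respond(stimulus: str) -> str:
        """Return a spoken line for a physical or app-driven stimulus.

        There is no microphone input; the robot only reacts to events.
        """
        options = RESPONSES.get(stimulus)
        if not options:
            return ""  # Unknown stimuli are ignored.
        return random.choice(options)

    if __name__ == "__main__":
        print(respond("heart_pressed"))
        print(respond("path_blocked"))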

Is it part of the OS?

Yes, it is. As mentioned, all elements of the robots' personalities are part of LionsOS. With our innovative work in intelligent systems, LionsBot seeks to deliver an enjoyable user experience while placing a strong focus on safety, security, and complete solutions from the ground up.

Our LionsOS is much more than just personality; it also governs how the robot reacts to its surroundings, the robot's understanding of its location (localisation), and the robot's cleaning behaviour. It is a comprehensive system which is continuously improved, making the robots clean better and work better.

Does the client have settings to customize the humour to the site?


The researchers recently released a video to demonstrate the capabilities of the robot they call Eva. It uses a generative model to produce a synthesized image of the face, from which it can then output motor actions to produce the expression on the soft-skin robotic face. The team is led by Boyuan Chen, a PhD student in computer science.
Dr. Sai Balasubramanian, physician, focused on healthcare, digital innovation and policy
"This technology is possibly a huge step into the healthcare expert system and robotics space," stated the author of the Forbes account, Dr. Sai Balasubramanian, a physician focused on the intersections of healthcare, digital innovation and policy. "If this learning algorithm can be improved to not only mimic human emotions, but also respond to them appropriately, it may possibly be a unique, yet questionable, addition to the world of healthcare innovation," he stated.

Yes, LionsBot offers a comprehensive set of paid customisation options for its users, including language along with localisation for its engagement features. LionsBot's robots are currently available in more than 20 countries and 7 languages (e.g., German, French, Polish) around the world, with the company working closely with customers to customise each robot to make sure it harmonizes with the local community from the beginning.

For major languages, we are able to let the user choose male or female voice packs, as well as fun or professional voices. For example, a 5-star hotel would require a professional voice compared to a shopping mall. The robot also has a collection of songs and jokes.
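A hedged sketch of how such per-site voice customisation might be represented (illustrative only; the field names and values are not from LionsBot):

    from dataclasses import dataclass

    @dataclass
    class VoiceProfile:
        """Illustrative per-deployment voice configuration."""
        language: str = "en"        # e.g. "de", "fr", "pl"
        gender: str = "female"      # "male" or "female" voice pack
        tone: str = "professional"  # "fun" for malls, "professional" for hotels
        jokes_enabled: bool = False

    # A 5-star hotel might prefer a professional voice; a shopping mall might enable jokes.
    hotel_profile = VoiceProfile(language="en", gender="male", tone="professional")
    mall_profile = VoiceProfile(language="de", gender="female", tone="fun", jokes_enabled=True)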

The Columbia team created a physical animatronic robot face with soft skin, tied to a vision-based self-supervised learning framework for mimicking human facial expressions.

For instance, social robots have the potential to counter the damaging effects of social isolation and loneliness, especially among the elderly, in what is called "robo therapy." The American Psychological Association states, "Socially assistive robots could offer companionship to lonely seniors" and augment the practice of trained professionals.

A group at Columbia University is combining AI and advanced robotics to have a robot imitate human facial expressions. The device uses animatronics, defined as a multidisciplinary field integrating puppetry, anatomy, and mechatronics, which is frequently seen in amusement park attractions.

Read the source articles and information in the International Journal of Human-Computer Studies, in The New York Post, and in Forbes.

Animatronic Columbia University Robot Mimics Human Facial Expressions


Researchers Working to Improve Autonomous Vehicle Driving Vision in the Rain 

Self-driving cars can have trouble "seeing" in the rain or fog, with the vehicles' sensing systems potentially blocked by snow, ice, or torrential rainstorms, and their ability to "read" road signs and road markings impaired.

The team places two radar sensors on the hood of the vehicle, enabling the system to see more area and detail than a single radar sensor. The team conducted tests comparing their system's effectiveness to a lidar-based system, on clear days and nights and then under simulated foggy weather conditions. The result was that the radar plus lidar system performed far better than the lidar-alone system.
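A minimal sketch of the general idea of fusing returns from two spaced radar sensors into one denser point cloud (illustrative only, not the UC San Diego team's actual pipeline; the mounting offsets and sample returns are made up):

    import numpy as np

    # Hypothetical mounting positions of the two hood radars in the vehicle frame (meters).
    LEFT_RADAR_OFFSET = np.array([0.0, -0.75, 0.0])
    RIGHT_RADAR_OFFSET = np.array([0.0, 0.75, 0.0])

    def to_vehicle_frame(points: np.ndarray, offset: np.ndarray) -> np.ndarray:
        """Translate radar points (N x 3, in the sensor frame) into the vehicle frame."""
        return points + offset

    def fuse_radars(left_points: np.ndarray, right_points: np.ndarray) -> np.ndarray:
        """Stack returns from both radars into one denser combined point cloud."""
        return np.vstack([
            to_vehicle_frame(left_points, LEFT_RADAR_OFFSET),
            to_vehicle_frame(right_points, RIGHT_RADAR_OFFSET),
        ])

    # Toy example: a few sparse returns from each sensor.
    left = np.array([[12.1, 0.4, 0.2], [12.3, 0.6, 0.1]])
    right = np.array([[12.2, 0.5, 0.3]])
    print(fuse_radars(left, right).shape)  # (3, 3): a denser combined cloud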

"A great deal of autonomous vehicles nowadays are using lidar, and these are basically lasers that shoot out and bounce back to create points for a particular object," stated Kshitiz Bansal, a computer science and engineering Ph.D. student at the University of California San Diego, in an interview.

"So, for example, a vehicle that has lidar, if it's going into an environment where there is a great deal of fog, it won't be able to see anything through that fog," Bansal stated. "Our radar can get through these bad weather conditions and can even see through fog or snow," he stated.

The group uses millimeter wave radar, a variation of radar that uses short-wavelength electromagnetic waves to detect the range, speed, and angle of objects.

To help autonomous vehicles navigate safely in rain and other severe weather, researchers are looking at a new kind of radar.

Researchers are studying ways to improve the vision of autonomous vehicles operating in bad weather conditions of rain, fog, and snow. (Credit: Getty Images)
By John P. Desmond, AI Trends Editor

The university's autonomous driving research team is working on a new approach to improve the imaging capability of existing radar sensors, so they more accurately predict the shape and size of objects in an autonomous vehicle's view.
Dinesh Bharadia, professor of electrical and computer engineering, UC San Diego Jacobs School of Engineering
"It's a lidar-like radar," stated Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering, adding that it is a cost-effective approach. "Fusing lidar and radar can also be done with our techniques, but radars are cheap. This way, we don't need to use expensive lidars."

Many autonomous vehicles rely on lidar technology, which works by bouncing laser beams off surrounding objects to provide a high-resolution 3D picture on a clear day, but does not do so well in fog, snow, dust, or rain, according to a recent report from abc10 of Sacramento, Calif.

20 Partners Working on AI-SEE in Europe to Apply AI to Vehicle Vision

In this case, the object produces a dense cluster of reflections that are not moving. Classical radar processing would interpret the object as a railing, a broken-down car, a highway overpass, or some other object.

Improved autonomous vehicle vision is also the goal of a project in Europe, called AI-SEE, involving startup Algolux, which is working together with 20 partners over a period of three years toward Level 4 autonomy for mass-market vehicles. Founded in 2014, Algolux is headquartered in Montreal and has raised $31.8 million to date, according to Crunchbase.

Dr. Werner Ritter, consortium lead, Mercedes-Benz AG
"Algolux is among the few companies worldwide that is well versed in the end-to-end deep neural networks that are required to decouple the underlying hardware from our application," stated Dr. Werner Ritter, consortium lead, from Mercedes-Benz AG. "This, together with the company's comprehensive understanding of using their networks for robust perception in bad weather, directly supports our application domain in AI-SEE."


The ability of the autonomous car to detect what is in motion around it is essential, no matter the weather conditions, and the ability of the car to understand which objects around it are stationary is also important, notes a recent post in the Drive Lab series from Nvidia, an engineering look at specific autonomous vehicle challenges. Nvidia is a chipmaker best known for its graphics processing units, widely used for the development and deployment of applications using AI techniques.

A deep neural network (DNN) is an artificial neural network with many layers between the input and output layers, according to Wikipedia. The Nvidia team trained its DNN to identify moving and stationary objects, as well as distinguish between different types of stationary objects, using data from radar sensors.
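As a rough illustration of what such a classifier can look like in code (a minimal sketch only, not Nvidia's actual model; the layer sizes, input features, and class list are invented), a small network might map per-cluster radar features to an object class:

    import torch
    import torch.nn as nn

    # Hypothetical object classes; a production radar DNN would use its own taxonomy.
    CLASSES = ["moving_vehicle", "guardrail", "overpass", "parked_vehicle", "other_static"]

    class RadarClusterClassifier(nn.Module):
        """Tiny DNN mapping hand-picked radar cluster features to an object class."""
        def __init__(self, num_features: int = 6, num_classes: int = len(CLASSES)):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(num_features, 64),
                nn.ReLU(),
                nn.Linear(64, 64),
                nn.ReLU(),
                nn.Linear(64, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    # Toy features per cluster: mean range, mean Doppler velocity, spread, intensity, etc.
    model = RadarClusterClassifier()
    fake_batch = torch.randn(8, 6)
    logits = model(fake_batch)
    print(logits.shape)  # (8, 5): one score per class for each cluster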

Many stakeholders involved in fielding safe autonomous vehicles find themselves working on similar problems from their own perspective. Some of those efforts are likely to result in important software being offered as open source, in an effort to continually improve autonomous driving systems, a shared interest.

Read the source articles and information from abc10 of Sacramento, Calif., from AutoMobilSport, and in a blog post in the Drive Lab series from Nvidia.

The approach leads to improved results. "With this additional information, the radar DNN is able to distinguish between different kinds of obstacles, even if they are stationary, increase confidence of true positive detections, and reduce false positive detections," the author stated.

The project will be co-funded by the National Research Council of Canada Industrial Research Assistance Program (NRC IRAP), the Austrian Research Promotion Agency (FFG), Business Finland, and the German Federal Ministry of Education and Research (BMBF), under the PENTA EURIPIDES label endorsed by EUREKA.


Given that radar reflections can be quite sparse, it is virtually infeasible for humans to visually identify and label vehicles from radar data alone. Lidar data, which can produce a 3D picture of surrounding objects using laser pulses, can supplement the radar data.

Nvidia Researching Stationary Objects in its Driving Lab

The Algolux technology uses a multisensory data fusion approach, in which the acquired sensor data will be fused and simulated by means of sophisticated AI algorithms tailored to adverse weather perception requirements. Algolux plans to apply technology and domain knowledge in the areas of deep learning AI algorithms, fusion of data from different sensor types, long-range stereo sensing, and radar signal processing.


The Nvidia lab is working on using AI to address the shortcomings of radar signal processing in detecting moving and stationary objects, with the goal of improving autonomous vehicle perception.
The author of the Nvidia post, in her position for about four years, formerly worked as a systems engineer on Tesla's Autopilot software. Classical radar processing bounces radar signals off of objects in the environment and analyzes the strength and density of the reflections that come back. If a dense and sufficiently strong cluster of reflections comes back, classical radar processing can determine this is likely some kind of large object.
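A minimal sketch of that classical clustering idea (illustrative only; real radar stacks use far more elaborate signal processing), using DBSCAN to group strong reflections into candidate objects:

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Toy radar returns: x, y position (meters) and reflection intensity.
    returns = np.array([
        [20.1, 1.0, 0.9], [20.3, 1.1, 0.8], [20.2, 0.9, 0.85],   # dense, strong cluster
        [45.0, -3.0, 0.2],                                        # lone weak return
    ])

    # Keep only sufficiently strong reflections, then cluster by position.
    strong = returns[returns[:, 2] > 0.5]
    labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(strong[:, :2])

    # Each cluster of dense, strong returns is treated as one likely large object.
    num_objects = len(set(labels) - {-1})
    print(f"candidate objects detected: {num_objects}")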

The intent is to build a novel, robust sensor system supported by artificial intelligence-enhanced vehicle vision for low-visibility conditions, to make safe travel possible in every relevant weather and lighting condition such as snow, heavy rain, or fog, according to a recent account from AutoMobilSport.

Edge Cases And The Long-Tail Grind Towards AI Autonomous Cars 

One aspect to immediately discuss entails the fact that the AI involved in today's AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly unable to reason in the same manner that humans can.

Would you perform the exact same evasive actions when encountering a deer as you would for the dog that was in the middle of the roadway?

You are the responsible party for the driving actions of the vehicle, despite how much automation might be tossed into a Level 2 or Level 3 car.

Each new instance becomes its own distinct consideration. You would probably need to mentally recalculate what to do as the driver.

Since semi-autonomous cars require a human driver, the adoption of those types of cars will not be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).

Let's make another change. Without having said so, it is likely that you assumed the weather for these scenarios of the animal crossing into the street was fairly neutral. Maybe it was a bright day and the road conditions were rather plain or uneventful.

Doomsayers would insist that self-driving cars are not going to be suitably ready for public roadway use until all edge or corner cases have been conquered. In that vein, that future nirvana can be interpreted as the day and moment when we have completely cleared out and covered all the bases that furtively reside in the imperious long tail of autonomous driving.

It would seem nearly self-evident that the number of combinations and permutations of potential driving situations is going to be massive. We can quibble over whether this is a finite number or an infinite number, though in practical terms this is one of those counting predicaments similar to the number of grains of sand on all the beaches throughout the entire world. In short, it is a very, very, very big number.

How many such twists and turns can we come up with?

Another way to define an edge or corner case is as the circumstances beyond the crux or core of whatever our focus is.

For more about the levels as a sort of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

To understand the dangers of normalization of deviance when it comes to self-driving cars, here's my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Do your driving choices change now that the weather is adverse?

For semi-autonomous vehicles, it is important that the public be forewarned about a disturbing aspect that's been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 vehicle, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

That's a tall order and a tale that might be informative, or it might be a tail that is wagging the dog, and we can find other ways to cope with those vexing edges and corners.

Essentially, you may be more risk-prone if the animal is a deer or a dog, and be willing to put yourself at greater risk to save it. When the situation involves a chicken, you may decide that the personal risk versus the harming of the intruding animal is balanced differently. Naturally, some would absolutely argue that the chicken, the deer, and the dog are all comparable, and that drivers should not try to split hairs by declaring that one animal is more valuable than another.

This can get extremely murky and subject to acrimonious discourse. Those scenarios that somebody claims are edge or corner cases might be more properly tagged as part of the core. Meanwhile, circumstances tossed into the core might arguably more truly belong in the edge or corner case category.

The tires might not grip the street due to the film of water. All in all, the bad weather makes this an even worse situation.

Those that blithely refer to the long tail of self-driving cars can have an air of one-upmanship, holding court over those that do not understand what the long tail is or what it entails. With the finest sort of indignation and tonal inflection, the hoity-toity speaker can make others feel inadequate or ignorant when they "naively" try to refute the famous (and infamous) long tail.

In a sort of Groundhog Day movie manner, let's repeat the scenario, but we will make a small change. Are you ready?

Let's dive into the myriad of aspects that come into play on this topic.

One viewpoint is that humans fill in the gaps of what they might know by exploiting their capability for common-sense reasoning. This acts as the always-ready contender for handling unexpected scenarios. Today's AI efforts have not yet been able to crack open how common-sense reasoning seems to happen, and therefore we cannot, for now, rely upon this presumed crucial backstop (for my coverage about AI and common-sense reasoning, see my columns).

The other kicker is the matter of common-sense reasoning.

Envision that you are driving your car and encounter a deer that has all of a sudden darted into the street. Fewer people have had this happen, though nonetheless, it is a somewhat common occurrence for those that live in an area that has deer aplenty.

This certainly would be a laborious job if you were to try and program an AI driving system based on each possible instance. Even if you enlisted a veritable herd of ace AI software developers, you can certainly expect this would take years upon years to perform, likely many decades or potentially centuries, and still be faced with the fact that there is another unaccounted-for edge or corner case remaining.

Self-Driving Cars And The Long Tail

Understanding The Levels Of Self-Driving Cars

We can add more fuel to this fire by bringing up the concept of having a long tail. People use the catchphrase "long tail" to describe situations where there is an occurrence of something as a posited main or core, and then, in an assumed ancillary sense, a lot of other aspects that tail off. You can mentally picture a big bunched-up area on a chart and then a narrow offshoot that continues, becoming a veritable tail to the bunched-up part.

For why remote piloting or operating of self-driving vehicles is generally shunned, see my description here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   .

Think of that you are driving your car and encounter a dog that has suddenly darted into the street. Presuming that all worked out, the pooch was fine and nobody in your car got hurt either.

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The Level 4 efforts are gradually trying to get some traction by undergoing selective and very narrow public roadway trials, though there is controversy over whether this testing ought to be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).

The other side of that coin is the contention that simulations are based on what people believe might occur. As such, the real world can be surprising in contrast to what humans might typically imagine will occur. Those computer-based simulations will then invariably fall short and not end up covering all the possibilities, say those critics.

In terms of a chicken entering the road, well, unless you live near a farm, this would appear a bit more far-fetched. On a daily drive in a typical city setting, you probably will not see lots of chickens charging into the street.

There is not yet a true self-driving car at Level 5, and we don't yet even know if this will be possible to achieve, nor how long it will take to get there.

A plane landing on the road amidst car traffic would be a candidate for consideration as an edge or corner case. A dog or deer or chicken that wanders into the roadway would be less likely interpreted as an edge or corner case; it would be a more typical, or core, experience. The former instance is extraordinary, while the latter is somewhat commonplace.

Before jumping into the details, I'd like to clarify what is meant when referring to true self-driving cars.

That does not necessarily have to be the case. Perhaps they have a multitude of smaller products that might seem barely worth keeping around.

We can keep going.

The bulk of the automakers and self-driving tech companies are using computer-based simulations to try and ferret out driving situations and get their AI driving systems ready for whatever may arise. The belief by some is that if sufficient simulations are run, the totality of whatever will happen in the real world will have already been surfaced and handled before putting self-driving cars into the real world.

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Change that assumption about the conditions and consider that there have been gobs of rain, and you are in the midst of a heavy downpour. Your windshield wiper blades can barely keep up with the sheets of water, and you are straining mightily to see the roadway ahead. The road is thoroughly soaked and remarkably slick.

Here's an intriguing question that deserves pondering: Are AI-based true self-driving cars destined to never be viable on our roads due to the limitless possibilities of edge or corner cases and the infamous long-tail conundrum?

In general, the long tail should get its due and be given proper examination. Combining the concept of the long tail with the concept of the edge or corner cases, we may suggest that the edge or corner cases are lumped into that long tail.

http://ai-selfdriving-cars.libsyn.com/website

You sit at the steering wheel with those macroscopic mental templates and invoke them when a particular situation occurs, even if the specifics are rather unexpected or unanticipated. If you've dealt with a dog that was loose in the street, you have likely formed a template for when nearly any sort of animal is loose in the street, including deer, chickens, turtles, and so on. You do not need to prepare ahead of time for every single animal in the world.
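A toy sketch of this template idea as it might appear in an AI driving system (purely illustrative; the categories and maneuvers are invented and are not from any actual driving stack):

    # Map specific encounters onto a small set of macroscopic templates,
    # rather than enumerating every possible animal or object ahead of time.
    TEMPLATES = {
        "animal_in_road": {"slow_down": True, "prepare_to_stop": True, "swerve": False},
        "debris_in_road": {"slow_down": True, "prepare_to_stop": False, "swerve": True},
    }

    # Specific instances are folded into a broader template.
    INSTANCE_TO_TEMPLATE = {
        "dog": "animal_in_road",
        "deer": "animal_in_road",
        "chicken": "animal_in_road",
        "turtle": "animal_in_road",
        "tire_tread": "debris_in_road",
    }

    def plan_response(detected: str) -> dict:
        """Return a coarse maneuver plan, falling back to a cautious default."""
        template = INSTANCE_TO_TEMPLATE.get(detected, "animal_in_road")
        return TEMPLATES[template]

    print(plan_response("deer"))   # handled by the generic animal template
    print(plan_response("moose"))  # never seen before, still gets a cautious plan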

This concept is borrowed from the field of statistics. There is a rather more precise meaning in a purely statistical sense, but that's not how the bulk of people use the expression. The informal meaning is that you may have lots of less visible elements that are in the tail of whatever else you are doing.

Let's repeat this once again and make another change.

I've urged that there ought to be a Chief Safety Officer at self-driving car makers, here's the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

You see, for Level 4 self-driving cars, the developers are supposed to indicate the Operational Design Domain (ODD) in which the AI driving system is capable of driving the vehicle.
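As a rough illustration of what an ODD declaration might look like when expressed in code (a sketch only; the fields and limits here are invented and do not follow any standard ODD schema):

    from dataclasses import dataclass, field

    @dataclass
    class OperationalDesignDomain:
        """Illustrative ODD: the conditions under which the AI driving system may operate."""
        allowed_weather: set = field(default_factory=lambda: {"clear", "light_rain"})
        allowed_road_types: set = field(default_factory=lambda: {"divided_highway", "urban_arterial"})
        max_speed_mph: int = 45
        daylight_only: bool = True

        def permits(self, weather: str, road_type: str, speed_mph: int, is_daytime: bool) -> bool:
            """Return True if current conditions fall inside the declared ODD."""
            return (
                weather in self.allowed_weather
                and road_type in self.allowed_road_types
                and speed_mph <= self.max_speed_mph
                and (is_daytime or not self.daylight_only)
            )

    odd = OperationalDesignDomain()
    print(odd.permits("light_rain", "urban_arterial", 35, True))  # True: inside the ODD
    print(odd.permits("heavy_fog", "urban_arterial", 35, True))   # False: outside the ODD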

One aspect that often escapes attention is that the number of core cases does not necessarily need to be larger than the number or size of the edge cases. We simply presume that would be the sensible arrangement. Yet it might be that we have a very small core and a significantly large set of edge or corner cases.

A rookie teenage driver is frequently shocked by the variability of driving. They encounter a scenario that they've not encountered before and enter into a bit of a short-lived panic mode. What to do? Much of the time, they muddle their way through and do so without any scrape or catastrophe. Ideally, they learn what to do the next time that a similar setting arises and will thus be less caught off-guard.


These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is a floating argument that there ought not to be any public street trials of true self-driving cars until the proper completion of extensive and seemingly exhaustive simulations. The counterargument is that this is impractical because it would postpone roadway testing on an indefinite basis, and that the delay implies more lives lost due to everyday human driving.

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: https://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Here's the rub. How do we decide what is on the edge or corner, rather than being categorized as in the core?

Whenever a self-driving car does something awry, it is easy to excuse the matter by claiming that the act was simply in the long tail. This shuts down anyone expressing concern about the misbehavior. Here's how that goes. The contention is that any such complaint or finger-pointing is misplaced because the edge case is simply an edge case, representing a low-priority and less weighty element, and not significant in contrast to whatever the core includes.

Based on that description, I trust that you realize the long tail can be rather important, even if it does not get much apparent attention. The long tail can be the basis for a business and be exceptionally vital. If the company only keeps its eye on the hit, it might wind up in a foul-up if it neglects or ignores the long tail.

As a clarification, true self-driving cars are ones where the AI drives the car entirely by itself and there isn't any human assistance during the driving task.

One perspective is that it makes little sense to try and identify all the possible edge cases. Presumably, human drivers do not know all the possibilities, and despite this lack of awareness are able to drive a car and do so safely the preponderance of the time. You could argue that people lump together edge cases into more macroscopic collectives and treat the edge cases as specific instances of those bigger conceptualizations.

How will self-driving cars handle edge cases and the long tail beyond the core?

Self-driving car edge cases, such as unusual pedestrian crosswalk patterns, might always be outside the range of what the AI driving system can handle, some experts suggest. (Photo by Ryoji Iwata on Unsplash)
By Lance Eliot, the AI Trends Insider

On the subject of off-road self-driving cars, here's my information extraction: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

Conclusion

Some may claim that a deer is more likely to be trying to leave the street and more apt to run toward the side of the roadway. The dog may choose to remain in the street and run around in circles. It is difficult to say whether there would be any noticeable difference in behavior.

For additional information about ODDs, see my discussion at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

The combinations and permutations can be overwhelming.

Returning to driving a car, the dog or perhaps a deer that ventured into the street proffers a driving incident or event that we probably would agree is somewhere in the core of driving.

For some drivers, a chicken is a whole different matter than a deer or a dog. If you were going fast in the car and there wasn't much latitude to readily avoid the chicken, it is possible that you would go ahead and ram the chicken. We generally accept the possibility of having chicken as part of our meals, thus one less chicken is ostensibly all right, particularly in contrast to the hazard of potentially rolling your car or veering into a ditch upon abrupt braking.

This squishiness has another undesirable effect.

Those experts assert that no matter how tenaciously those heads-down, earnest AI developers keep trying to program the AI driving systems, they will continually fall short of the mark. There will be yet another new edge or corner case to be had. It is like a game of whack-a-mole, in which another mole will always appear.

There is likewise the haughtiness aspect.

In the midst of the heated arguments about using simulations, do not get lost in the fray and somehow reach a conclusion that simulations are the final silver bullet, or fall into the trap that simulations won't reach the highest bar and ergo should be totally disregarded.


With that clarification, you can envision that the AI driving system will not natively somehow "know" about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ .

A company may have a core product that is considered its hit or primary seller. It turns out the business has a great deal of other products too.

Why this added emphasis about the AI not being sentient?

Some are quick to offer that perhaps simulations would solve this predicament.

Because I want to highlight that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is a dangerous and ongoing tendency nowadays to anthropomorphize AI. In essence, people are assigning human-like sentience to today's AI, despite the inarguable and unassailable fact that no such AI yet exists.

There are a lot more twists and turns on this topic. Due to space constraints, I'll offer just a couple more snippets to further whet your appetite.

This has taken us full circle and returned us to the angst over an unlimited supply of edge or corner cases. It has also brought us squarely back to the question of what constitutes an edge or corner case in the context of driving a car. The long tail for self-driving cars is frequently described in a hand-waving way. This vagueness is spurred or prompted by the lack of a definitive accounting of what is indeed in the long tail versus what is in the core.

The developers of AI driving systems can presumably try to use an equivalent approach.

The thing is, this is not merely a video game; it is a life-or-death matter, because whatever a driver does at the wheel of a car can spell life or potentially death for the driver, the passengers, drivers of nearby cars, pedestrians, and so on.

True self-driving cars are driven by an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.

Some experts fervently state that we will never achieve true self-driving cars because of the long-tail problem. The argument is that there are zillions of edge or corner cases that will continually emerge all of a sudden, and the AI driving system won't be prepared to handle those instances. This in turn suggests that self-driving cars will be ill-prepared to safely operate on our public streets.

For why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

These examples raise a debate about the so-called edge or corner cases that can take place when driving a car. An edge or corner case refers to instances of something that is considered uncommon or unusual. These are events that tend to take place once in a blue moon: outliers.

The ethical implications of AI driving systems are significant, see my discussion here: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

An allied subject involves using closed tracks that are purposely designed for the testing of self-driving cars. By being off the public roadways, a proving ground ensures that the public at large is not endangered by whatever waywardness might emerge during driverless testing. The arguments surrounding the closed track or proving grounds approach resemble the tradeoffs mentioned when discussing the use of simulations (again, see my remarks published in my columns).

Make no mistake, simulations are necessary and a vital tool in the pursuit of AI-based true self-driving cars.

For Level 4 and Level 5 true self-driving cars, there will not be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.

The practical view is that there would always be that last one that evades being pre-established.

Some likewise believe that the emerging ontologies for self-driving cars will help in this undertaking. As noted, for Level 4 self-driving cars, the developers are supposed to indicate the Operational Design Domain (ODD) in which the AI driving system is capable of driving the vehicle. Perhaps the ontologies being crafted toward a more definitive semblance of ODDs would spur the kinds of driving-action templates needed.

We'll proceed.

Pretend that it is nighttime rather than daytime. Imagine that the setting includes no other traffic for miles.

Copyright 2021 Dr. Lance Eliot.

Imagine that you are driving your car and come upon a chicken that has all of a sudden darted into the road. What do you do?

As an example, there was a news report about a plane that landed on a highway because of in-flight engine problems. I ask you, how many of us have seen a plane land on the highway in front of us?

Adverse Weather Creates Another Variation on Edge Cases

Experiences in Smart City Challenges Such as in Columbus, Ohio, Sobering  

"We did see drivers using signals coming from the connected vehicle environment. We're seeing improvements in driver behavior," Bishop stated.

The app uses these open-source tools: OpenStreetMap, to get current crowdsourced information from the community, similar to how the driving app Waze works; and OpenTripPlanner, to find itineraries for different modes of transportation.
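To illustrate the kind of query a trip-planning app can make against OpenTripPlanner (a sketch only; it assumes a locally running OTP 1.x server with the classic REST planner endpoint, and the coordinates are placeholders, not Pivot's actual integration):

    import requests

    # Hypothetical local OpenTripPlanner instance; the coordinates below are placeholders.
    OTP_URL = "http://localhost:8080/otp/routers/default/plan"

    params = {
        "fromPlace": "39.9612,-82.9988",   # origin as "lat,lon"
        "toPlace": "40.0060,-83.0300",     # destination as "lat,lon"
        "mode": "TRANSIT,WALK",            # combine bus legs with walking
        "numItineraries": 3,
    }

    response = requests.get(OTP_URL, params=params, timeout=10)
    response.raise_for_status()
    plan = response.json().get("plan", {})

    for itinerary in plan.get("itineraries", []):
        minutes = itinerary["duration"] / 60
        modes = [leg["mode"] for leg in itinerary["legs"]]
        print(f"{minutes:.0f} min via {' -> '.join(modes)}")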

In 2015, Chinese organizations took the top three of four competitions, and in June, Chinese tech companies Alibaba and Baidu swept the AI City Challenge, beating competitors from some 40 countries.

The city of Oakland, Calif., is evaluating an autonomous traffic management platform from startup Hayden AI, with vision-based perception devices that plug into the city's transportation infrastructure. The goal of Hayden AI is to deliver reliable, sustainable, and equitable public transportation, according to a recent account in Forbes.

Hayden's devices are installed in various kinds of vehicles in the city's fleet, including transit buses, street sweepers, and trash trucks. Each perception device has precise localization, allowing it to map and detect lane lines, traffic lights, street signs, fire hydrants, parking meters, and trees. The data is used to build a "digital twin," a rich 3D virtual model of the city.

The results were a payoff from investments in smart cities by the government in China, which is conducting pilot programs in numerous cities and has, by some estimates, half of the world's smart cities, according to an account in Wired.

Connected Vehicle Traffic Management Test Aimed at Distracted Driving

The Pivot app had 3,849 downloads as of late June; the city will continue to fund its development and use.

Etch, a geospatial services startup, was founded in 2018 in Columbus. The company got its start by working with Smart Columbus on a multimodal transportation app, called Pivot, to help users plan trips throughout central Ohio using buses, ride hailing, carpool, micromobility, or personal vehicles.
Darlene Magold, CEO and co-founder, Etch

“The mobility problem in Columbus is access to mobility and people not knowing or understanding what options are available to them,” stated Darlene Magold, CEO and co-founder of Etch, in an account in TechCrunch. “Part of our objective was to show the community what was available and provide choices to sort those options based on cost or other information.”

“Because we are open source, the integration with Uber, Lyft, and other mobility providers really gives users a lot of options, so they can actually see what mobility choices are available, aside from their own vehicle if they have one,” stated Magold. “It removes that anxiety of traveling” using different modes such as a scooter, bike, bus, or Uber, she suggested.

His organization is charged with continuing the work of the challenge. He stated the focus will be, “How do we use technology to improve quality of life, to solve community problems of equity, to address climate change, and to achieve prosperity in the region?”

Four years earlier, the first global AI City Challenge was held to promote the development of AI to support transportation infrastructure in a smarter way. Teams representing American companies or universities took the top spots.

Read the source articles and information in Wired, in TechCrunch, and in Forbes.

Five of the eight projects launched by the challenge will continue, including a citywide “operating system” to share information between government and private entities, in support of smart kiosks and parking and trip-planning apps.

The first Smart City Challenge sponsored by the US Department of Transportation selected Columbus, Ohio, to receive $50 million to be invested over five years to reshape the city's transportation options by taking advantage of new technology. A final report recently issued by the city's Smart Columbus Program described the effort as promising yet falling a bit short.

Part of it was bad luck. Many programs set to get off the ground launched just as the pandemic led to lockdowns in 2020, lessening demand for transportation options. “It was not supposed to be a competition for who has more sensors, or anything like that, and I think we got a little sidetracked at a certain point,” stated Jordan Davis, director of Smart Columbus, to Wired.

Pivot App from Etch Helps Sort Multimodal Transportation Options

Autonomous Traffic Management Platform Being Tried in Oakland

“We want the technologies to develop, because we want to improve safety and efficiency and sustainability. Selfishly, we also want this technology to develop here and improve our economy,” Caldwell stated in the Wired account.

In an effort to reduce accidents attributed to distracted driving, Columbus experimented with connected vehicles. From October 2020 to March 2021, the city worked with Siemens, which supplied onboard and roadside systems to build vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) infrastructure. The connected cars could “talk” to each other and to 85 intersections, seven of which had the highest crash rates in central Ohio. The investment with the German multinational conglomerate was $11.3 million.
Mandy Bishop, program manager, Smart Columbus

“We were looking at 11 different applications including red light signal warning, school zone notifications, intersection collision warning, freight signal priority and transit signal priority, using the connected vehicle technology,” stated Mandy Bishop, Smart Columbus program manager, to TechCrunch.

The selection of Columbus triggered a flood of proposals from companies that proved difficult to manage. “A great deal of people were expecting a lot from this project, and perhaps too much,” noted Harvey Miller, a geography professor and director of the Center for Urban and Regional Analysis at Ohio State University, who helped plan and evaluate the challenge. “What Columbus did was test innovative ideas,” Miller stated. “They learned a lot about what works and what doesn't work.”

Ghadiok applied his expertise in robotics, computer vision, and machine perception to build the system, which has a variety of uses. For one, the system can identify open parking meters, alerting drivers to available parking spaces nearby. Also, the platform can carry out traffic pattern analysis to determine pedestrian traffic through intersections by time of day, enabling delivery vans to schedule curb space deliveries more effectively.

Smart City Challenge competition winner Columbus, Ohio issued a final report showing its five-year effort to innovate around transportation demonstrated progress. (Credit: Getty Images)
By AI Trends Staff

China is investing more than the United States in areas of emerging technology, according to Stan Caldwell, executive director of Mobility21, a project at Carnegie Mellon University to assist smart-city development in Pittsburgh. AI researchers in the US can compete for government grants, from the National Science Foundation's Civic Innovation Challenge, or the Department of Transportation's Smart City Challenge.

“The network of spatially aware perception devices works together to build a real-time 3D map of the city. These devices learn over time and from each other to provide information and insights that can be shared across city agencies,” stated Vaibhav Ghadiok, co-founder and VP of Engineering with Hayden AI. “This can be used to make buses run on time by clearing bus lanes of parked cars, or help with city planning through better parking and curbside management.”

Final Report on Smart Columbus Cites Some Promising Endeavors