Lyft’s new robotaxi will be a Hyundai Ioniq 5… but not really

Motional, an autonomous driving company formed last year as a joint venture between Hyundai and Aptiv, has spilled the beans on its first commercial vehicle: an electric robotaxi.

The so-called Ioniq 5 robotaxi is based on the Hyundai vehicle of the same name, but enhanced with Motional’s driverless tech.

Credit: Motional
The Ioniq 5 robotaxi.
Credit: Hyundai
And the simple Ioniq 5… yeah, they do look alike.

The vehicle comes equipped with Level 4 autonomy, meaning it can operate without a human driver under certain conditions.

It’s fitted with “more than 30 sensors,” including cameras, radar, and LiDAR, which provide “360-degree perception, high-resolution images, and ultra-long range detection of objects.”

That’s pretty decent, but it lags a bit behind competitors such as Cruise and Waymo, whose vehicles feature more than 40 sensors.

Based on the Ioniq 5, the robotaxi is expected to have a 72.7kWh battery pack and approximately 460km of range on a single charge. Plus, it’s built on Hyundai’s new Electric-Global Modular Platform (E-GMP), which, as of 2021, will serve as the dedicated EV platform for all the Group’s brands, including Kia and Genesis.

According to Motional, the E-GMP platform gives the vehicle a spacious and comfortable interior, which will also feature various interfaces that let passengers interact with the taxi during the ride.

Starting in 2023, the robotaxis will be deployed in various US cities through Lyft’s ride-hailing network; Lyft has been Motional’s partner since December 2020.

In fact, it’s quite interesting that while Lyft sold its self-driving division in April, it’s still keeping a tight grip on the robotaxi business. Motional is but the latest collaboration, following deals with Woven Planet and Ford.

Motional and Hyundai will publicly debut the Ioniq 5 robotaxi at IAA Mobility in Munich, September 7-12. They’ve promised to reveal extra specs, and it remains to be seen whether the partnership with Lyft will prosper.



Why US data privacy laws go against what the people want

In 2021, an investigation revealed that home loan algorithms systematically discriminate against qualified minority applicants. Unfortunately, stories of dubious profit-driven data uses like this are all too common.

Meanwhile, laws often impede nonprofits and public health agencies from using similar data – like credit and financial data – to alleviate inequities or improve people’s well-being.

Legal data limitations have even been a factor in the fight against the coronavirus. Health and behavioral data is critical to combating the COVID-19 pandemic, but public health agencies have often been unable to access important information – including government and consumer data – to fight the virus.

We are faculty at the school of public health and the law school at Texas A&M University with expertise in health information regulation, data science and online contracts.

U.S. data protection laws often permit using data for profit but are more restrictive of socially beneficial uses. We wanted to ask a simple question: Do U.S. privacy laws actually protect data in the ways that Americans want? Using a national survey, we found that the public’s preferences are inconsistent with the restrictions imposed by U.S. privacy laws.

What does the U.S. public think about data privacy?

When we talk about data, we generally mean the information that is collected when people receive services or buy things in a digital society, including information on health, education and consumer history. At their core, data protection laws are concerned with three questions: What data should be protected? Who can use the data? And what can someone do with the data?

Our team conducted a survey of over 500 U.S. residents to find out what uses people are most comfortable with. We presented participants with pairs of 72 different data use scenarios. For example, are you more comfortable with a business using education data for marketing or a government using economic activity data for research? In each case, we asked participants which scenario they were more comfortable with. We then compared those preferences with U.S. law – particularly in terms of types of data being used, who is using that data, and how.
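For readers curious about the mechanics, here’s a minimal sketch of how pairwise choices like these can be turned into a comfort ranking. The scenario labels and responses are hypothetical, and the actual study used a more careful statistical model than simple win rates:

```python
from collections import defaultdict

# Hypothetical pairwise choices: (scenario A, scenario B, index of the
# option the participant was more comfortable with). The labels are
# illustrative, not the study's actual scenario wording.
choices = [
    ("university/health/research", "business/shopping/marketing", 0),
    ("government/economic/research", "business/education/marketing", 0),
    ("business/shopping/profit", "nonprofit/health/research", 1),
]

wins = defaultdict(int)
appearances = defaultdict(int)

for a, b, preferred in choices:
    appearances[a] += 1
    appearances[b] += 1
    wins[(a, b)[preferred]] += 1

# Rank scenarios by the share of comparisons in which participants
# picked them as the more comfortable option.
ranking = sorted(appearances, key=lambda s: wins[s] / appearances[s], reverse=True)
for s in ranking:
    print(f"{s}: comfort win rate {wins[s] / appearances[s]:.2f}")
```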

A survey of around 500 U.S. residents showed that people are most comfortable with data use that supports a public good and least comfortable with data use that is focused on producing profits.
Credit: Cason D. Schmit, CC BY-ND

Under U.S. law, the type of data matters tremendously in determining which rules apply. For example, health data is heavily regulated, while shopping data is not.

But surprisingly, we found that the type of data companies or organizations use was not particularly important to U.S. residents. Far more important was what the data was being used for, and by whom.

The public was most comfortable with groups using data for public health or research purposes. The public was also comfortable with the idea of universities or nonprofits using data as opposed to businesses or governments. They were less comfortable with organizations using data for profit-driven or law enforcement purposes. The public was least comfortable with businesses using economic data to increase profits – a use that is widespread and loosely regulated.

Overall, our results show that the public tends to be more comfortable with altruistic uses of personal data as opposed to self-serving data uses. The law more or less promotes the opposite.

What’s allowed, what’s not

Ideally, data protection laws would limit the riskiest data uses while permitting or even promoting beneficial, low-risk activities. However, this is not always the case.

For example, one federal law prevents sharing records of substance abuse treatment without an individual’s consent. It is, of course, beneficial in many cases to protect these sensitive records. However, during the ongoing opioid epidemic, these records could provide critical information on where and how to intervene to prevent overdose deaths. Worse yet, when only certain data is withheld for privacy, the remaining data can mislead researchers into drawing the wrong conclusions.

Sometimes, laws permit data use in ways that the U.S. public finds troubling. In most U.S. business contexts, using personal information for profit – for example, a company using personal information to predict customers’ pregnancies – is legal if this action is covered by a company’s privacy notice.

The American public’s awareness of and uneasiness with how companies use personal information has pushed lawmakers to explore new data regulations. Experts have argued that the status quo – a confusing patchwork of privacy laws – is inadequate, and some have argued for comprehensive privacy laws.

In the absence of federal legislation, some states have voted to put more comprehensive laws into place. California did in 2018 and 2020, Virginia and Colorado in 2021, and other states are likely to follow suit. If new laws are going on the books, we believe it is vitally important that the public has a say on what data uses should be restricted and which should be permitted.

How would good data privacy laws help?

Every year, hundreds of thousands of Americans die because of social factors like education, poverty, racism and inequality, and there are protected data sets that public health officials, researchers and policymakers could use to promote the common good.

The data-use case with the most public support is when researchers use education data for public health. Importantly, research shows that nearly 250,000 U.S. deaths annually can be attributed to low education – for example, a person having less than a high school diploma – and low education can contribute to poor nutrition, housing and work environments. But federal education privacy law limits groups from accessing education records for public health or any health research. In this case, U.S. data protection laws severely restrict researchers’ ability to understand these deaths or how to prevent them.

The data exists to better understand many other complex problems – like racism, obesity and opioid abuse – but data protection laws often get in the way of health authorities or researchers who want to use it.

Our research suggests that current legal barriers that prevent using data for the common good stand in stark contrast to the public’s wishes. As laws are revised or put into place, they could be designed to represent the public’s desires and facilitate research and public health. Until then, U.S. data privacy laws will continue to favor profit over the public good.

Article by Cason Schmit, Assistant Professor of Public Health, Texas A&M University; Brian N. Larson, Associate Professor of Law, Legal Argumentation and Rhetoric, Texas A&M University, and Hye-Chung Kum, Professor of Public Health, Texas A&M University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

4 tech founders who will change the way you think about sex

TNW Conference 2021 is just a couple of weeks away, and we’re beyond excited! It’ll have all the essentials of a great tech festival: a gorgeous venue, amazing parties, and ideas that will change how you think about sex.

Wait, what?

Sex might not sound like a typical topic for a tech conference, but our amazing line-up of speakers will show you how new technology can help fight for women’s sexual freedom and sex positivity.

Here are the four tech founders you’ll see at TNW2021 who will change the way you think about sex.

Jennifer Lyon Bell, Founder & Creative Director at Blue Artichoke Films

Credit: Blue Artichoke Films

“Sex is a force for unique self-expression, and for connection with the rest of humanity.”

With Blue Artichoke Films, Jennifer Lyon Bell creates award-winning erotic fiction movies, documentaries, and virtual reality experiences that portray the riveting, intimate, and sometimes delightfully awkward side of sex.

Her work shows that sex is a giant, beautiful, strange, messy, fascinating, fun, and thrilling force of nature. She encourages people to feel confident enough to trust their own desires and wants everyone to know that there’s nothing wrong with sexual pleasure.

Ti Chang, Co-founder and VP of Design at Crave

Credit: Products of Design

Having designed the first-ever line of sex toy jewelry, Ti Chang has been a leading voice in bringing modern adult toys into the mainstream.

Her award-winning designs often provoke thoughtful conversations about pleasure and sexuality, and allow women to enjoy pleasure on their own terms.

The company also works on projects that create, instigate, and inspire conversation around pleasure. By funding these projects, Crave hopes to tackle some of the most pressing forms of gender-based violence, including FGM and sex trafficking.

Together with Crave co-founder Michael Topolovac, Chang took things to the next level when she started the CRAVE Foundation for Women in 2012. The foundation provides no-strings-attached grants to individuals, local and international, working toward a world where pleasure is a universal human right.

Ela Darling, Marketing Director at ViRo Playspace

Credit: Ela Darling / Twitter

Perhaps best known for being the world’s first virtual reality camgirl, Ela Darling is a strong advocate of sexual safety, sex positivity, freedom of self-expression, sexual exploration, and self-discovery.

She currently works as marketing director at ViRo Playspace, a sex-positive online platform where users can explore their adult-themed fantasies through the power of VR.

The platform guarantees safety, anonymity, and non-judgment, and is home to immersive experiences for a wide variety of kinks, like BDSM, furry, and others.

They also offer a range of innovative sex tech that stays in perfect sync with what you see on screen, allowing users to touch and be touched, live, from anywhere in the world, in the comfort of their own homes.

Suki Dunham, Founder of OhMiBod

Credit: OhMiBod

Widely known as an OG of the sex-tech revolution, Suki Dunham is the award-winning creative genius behind the iPod Vibrator (a music-driven vibrator), blueMotion (a Bluetooth enabled vibrator), and Lovelife Rev (a lightweight massager for those with dexterity challenges).

As the founder of OhMiBod, she is a driving force behind the evolution of teledildonics and other interactive technology for sexual health and wellness.

The company recently became the first in the world to develop a mobile app that can control vibrators remotely, allowing partners to connect and control one another’s pleasure products from anywhere in the world.

Want to hear more?

You can find these speakers and more like them at TNW Conference on September 30 & October 1.

If you haven’t purchased a ticket yet, be sure to grab yours today while you can still save some cash.

See you there!

Is anyone actually winning the hyperloop race?

It’s been eight years since Elon Musk brought hyperloop technology back into modern public consciousness with a 2013 research paper.

His SpaceX-sponsored Hyperloop Pod Competition spawned many of the hyperloop designs in development today, with some teams scaling into fully functional startups with significant funding.

However, there is yet to be a commercially available hyperloop, and numerous companies are working on proprietary designs, mainly in stealth mode.

Every so often, they announce a partnership, release a video, or publish a feasibility study or research paper. It’s a trail of breadcrumbs and a lot of hype, but not always that much substance. 

I’ve put my detective hat on to look at the most prominent players in the competition to release the first commercial hyperloop. 

It’s worth stressing that hyperloop technology is constantly changing as research evolves and innovation accelerates. Here are my thoughts on how they compare:

HyperloopTT (US)

HyperloopTT is an American company whose team mixes employees with crowdsourced professionals who commit their time and knowledge in exchange for stock options.

In 2019, HyperloopTT released The Great Lakes Hyperloop Feasibility Study, detailing its hyperloop’s economic and technical feasibility. The research claims that the passenger and freight market will generate sufficient revenue to pay all capital and operating costs, with a nominal economic return of 11.8%, and that the project would not require any government subsidies.

HyperPort aims to move up to 2,800 shipping containers daily

In July, the company revealed HyperPort, a plug-and-play solution for port operators to increase capacity and efficiency while decreasing pollution and congestion. Partnering with terminal operator Hamburger Hafen und Logistik AG (HHLA), it plans to move 2,800 containers a day. The system is awaiting certification design review.

Scorecard: HyperloopTT is likely to be the first to bring its cargo solutions to market due to integration with existing ports.

Virgin Hyperloop One (US)

A design of the inside of a Virgin Hyperloop pod

In November 2020, Virgin Hyperloop successfully launched a two-seat prototype ridden by two members of staff. The hyperloop traveled 500 meters, reaching 172 kilometers per hour within 6.25 seconds. The company claims to have carried out over 500 tests in Las Vegas and recently released a new explainer video on Twitter.

They’re planning routes in Dubai, in India (between Mumbai and Pune), and, in the US, in North Carolina and Texas.

Scorecard: I rank them 1st for hype and PR. They’re good at visuals and videos. But I’m not sure a two-seat trip at a fraction of the speed or distance of an actual hyperloop is proof they’ll be the first in passenger journeys. 

DP World Cargo Speed (UAE)

Ok, this is kind of cheating in terms of competition as it means Virgin has two entries. In 2018 Virgin Hyperloop One and DP World joined forces to create DP World Cargospeed systems with a focus on carrying “high-priority, time-sensitive goods such as medical supplies and electronics.”

Not surprisingly, their focus is deliveries within Virgin Hyperloop’s planned routes. Their latest news came last December, announcing an R&D collaboration between the Technology Innovation Institute and Virgin Hyperloop.

Scorecard: Is it stealth or inactivity? They have the cargo chops, but we’ve not heard any hyperloop news since, so it’s unclear how close they are to getting pods in the tube. 

Hardt Hyperloop (The Netherlands)

Dutch company Hardt won the 2017 hyperloop competition. Its most significant difference from competitors is a switch that allows a vehicle to pass from one track to another, which could create a network of tubes connecting European cities.

The company was involved in developing the European Hyperloop Center (EHC) in 2020, an open innovation center located in Groningen, the Netherlands. The EHC plans to open a 2.7km test track in 2023, with a cargo-scale tube suitable for speeds up to 700km/h for testing hyperloop technologies.

The interior of a Hardt hyperloop pod prototype

They’ve created a life-size two-person pod prototype to roll out at events to drum up interest.

Scorecard: I suspect they are doing lots of behind-the-scenes work, especially on the regulatory groundwork needed to connect cities across the EU. One to watch.

Nevomo (Poland)

An illustration of a Nevomo magrail in action

Formerly called Hyper Poland, Nevomo is taking a different approach from the others: building high-speed magnetic railways first, to be transformed later into a vacuum version, effectively an upgrade.

Nevomo has a three-step approach:

  1. A passive magnetic levitation train operating on existing railway tracks at speeds up to 415 kph. This hybrid solution allows for both the magrail system and conventional trains on the same tracks.
  2. The transformation of this train into a vacuum system called “hyperrail” with a top speed of 600 kph on existing tracks.
  3. Creation of new tracks for the hyperloop to enable their vehicles to travel at up to 1,200 kph.
Nevomo’s three-stage strategy

 

The company leased land in June to build a full-scale 750m test track. From 2023, Nevomo says, trains using its magrail technology will be able to travel at 415 km/h.

Scorecard: One to watch as the company’s earlier stages are likely to fund the eventual hyperloop. I suspect we’ll be waiting a while, though. 

TransPod (Canada)

Canadian company TransPod has created a tube system different from the traditional hyperloop concept.

Simulation of a TransPod

Unlike a traditional hyperloop, the TransPod system uses moving electromagnetic fields to propel the pods with stable levitation off the bottom surface rather than compressed air. The technology sits on the pod rather than on the infrastructure. 

The company spent a chunk of time in 2020 designing ventilators for use by COVID-19 patients. They plan to operate across Canada and have tested their sub-systems in simulations and laboratory prototypes. 

The next step is physical tests on the entire integrated system.

Scorecard: I suspect they were hit hard by COVID-19 restrictions and pivoted to ventilators to keep the factories open. There’s nothing wrong with that. Like Nevomo, TransPod is creating new tech within new tech, which takes a lot more work.

The Boring Company (US)

The Boring Company is a tunnel construction and infrastructure company founded by Elon Musk in 2016. In 2018, it dug a test tunnel underneath the SpaceX offices in Hawthorne, California, for the R&D of The Boring Company’s tunneling and public transportation systems.

The LVCC Loop

In May 2019, the company landed a $48.7 million contract to design and construct a Loop system for the Las Vegas Convention Center (LVCC).

Walking between the New Exhibit Hall and North/Central Hall can take up to 15 minutes. Using the tunnel takes only one minute, even if it’s not that exciting.

In July, the company proposed a tunnel between the area where most SpaceX employees live and SpaceX’s Starbase in Boca Chica.

Scorecard: Eh, they’re making tunnels. So far, there’s no hyperloop in sight, just dudes driving people in the tunnels in Teslas. They’ll transition to AVs in the tunnels long before hyperloop, I suspect. 

How close are we to a commercial hyperloop? 

Don’t hold your breath. COVID-19 shutdowns pushed all of their earlier timelines back. 

Further, these companies need to keep paying a full complement of engineers, developers, architects, and other personnel while revenue remains years away.

Getting hyperloop up to speed (literally and figuratively) will take some time. I believe we will see cargo applications long before passenger journeys. The former is likely to be a priority to comply with city sustainability and carbon reduction goals. 

 



Everything you’ve wanted to know about hyperloop technology

You might have heard people talking about hyperloop technology. Most likely, they’re wondering why we’re still talking about it, and when, if ever, it will arrive.

It’s a big topic, but we’ve got you covered with an easy-to-understand explanation of all you need to know.

Where has hyperloop tech come from? 

The hyperloop idea came into public consciousness in 2013 when Elon Musk introduced the concept in a research paper that posited the Hyperloop as “a fifth mode of transport after planes, trains, cars, and boats.”

It’s not a new idea. Mechanical engineer George Medhurst created and patented the idea of a railway to move people or cargo through pressurized or evacuated tubes in the 18th century. 

The idea of pressurized tubes has been further refined and developed in various iterations. The most commercially successful relative is the maglev train, first developed in the 1960s. These are high-speed electric trains that use two sets of magnets: one set to repel and push the train up off the track, and another set to move the elevated train ahead, taking advantage of the lack of friction.

There’s also The Hyperloop Pod Competition, an annual competition sponsored by SpaceX to design and build a hyperloop prototype. Some of the winning teams have scaled to become fully functional startups.

However, there is yet to be a commercially available hyperloop, and numerous companies are working on proprietary designs. This means we have various visions of different hyperloop systems that deviate significantly from the Elon Musk hyperloop.

We have yet to see convergence towards particular technologies or parameters. Moreover, what is now possible in hyperloop technology will change extensively as research evolves and innovation accelerates.

How does hyperloop technology work?

A conceptual drawing of the inner working of hyperloop tech

Hyperloop systems are long horizontal tubes. Electromagnetic force levitates and propels pods inside a near-vacuum tube to reduce air friction and drag. The pods effectively float on a frictionless magnetic cushion. The tube is a low-pressure environment that enables high speed for low energy consumption.

A linear induction motor propels the pods along the tube in a straight line.
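The physics is worth a quick illustration: aerodynamic drag scales linearly with air density, so pumping the tube down to a fraction of atmospheric pressure cuts drag by the same fraction. Below is a minimal sketch with illustrative pod numbers; the drag coefficient and frontal area are assumptions, not any company’s published figures.

```python
def drag_force(air_density, velocity, drag_coeff=0.25, frontal_area=4.0):
    """Aerodynamic drag in newtons: F = 0.5 * rho * v^2 * Cd * A.

    drag_coeff and frontal_area (m^2) are illustrative pod values.
    """
    return 0.5 * air_density * velocity**2 * drag_coeff * frontal_area

v = 1000 / 3.6                         # 1,000 km/h expressed in m/s
sea_level = drag_force(1.225, v)       # ~1.2 kg/m^3: air at sea level
near_vacuum = drag_force(0.00122, v)   # ~1/1,000th of an atmosphere

print(f"Drag at sea level: {sea_level / 1000:.0f} kN")   # ~47 kN
print(f"Drag inside the tube: {near_vacuum:.0f} N")      # ~47 N
# Drag falls in direct proportion to pressure, which is why a pod can
# hold airliner-class speeds on a fraction of the propulsion energy.
```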

Within the tubes, pods travel for hundreds of kilometers at high speed, carrying passengers or cargo, even cars, between two locations; unlike trains, there are no stops along the way. The tubes can be built above or below ground. The aim is to have pods departing at a high rate, with low passenger wait times.

Hyperloop pods are designed to depart at quick intervals across different routes

Why do we need a hyperloop?

Intercity and international travel are at capacity. There aren’t enough bus drivers. In many cities, overcrowded airports are unable to meet demand, and it’s often cheaper to fly than travel by train. Hyperloop promises to fill in that gap and deliver speed and sustainability to transport people and cargo alike.

What are the advantages of hyperloop tech?

Hyperloop technology offers several significant advantages over other modes of transport:

Speed


The technology aims to propel passenger or cargo pods at speeds of over 1000 km/h. This is 3x faster than high-speed rail and more than 10x faster than traditional rail.

Imagine a trip from San Francisco to Los Angeles in 30 minutes — a distance of around 559 km. Hyperloop travel from Frankfurt to Amsterdam is 439 km and would take a mere 50 minutes — it’s currently around 4 hours 26 minutes by car.
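As a quick sanity check, the average speeds implied by those trips are easy to compute:

```python
def avg_speed_kmh(distance_km, minutes):
    """Average speed implied by covering distance_km in the given time."""
    return distance_km / (minutes / 60)

print(f"SF to LA: {avg_speed_kmh(559, 30):.0f} km/h")             # ~1,118 km/h
print(f"Frankfurt-Amsterdam: {avg_speed_kmh(439, 50):.0f} km/h")  # ~527 km/h
```

Note that the San Francisco to Los Angeles figure implies an average above 1,000 km/h, so the 30-minute claim assumes sustained speeds beyond the usual headline number.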

That would make it as fast as a plane, but cheaper than conventional high-speed rail.

Lower carbon emissions


The hyperloop offers low-energy long-distance travel, running on electricity and solar energy. Solar panels on the roof of above-ground hyperloops could generate energy. The tubes could also store electricity with the help of batteries. 

Further, freight traveling by hyperloop would alleviate the high carbon emissions of trucks. 

Weatherproof


The hyperloop is less vulnerable to bad weather and natural hazards such as rain, snow, wind, and earthquakes. There’s no risk of train tracks buckling in summer heat, as can happen with high-speed rail.

Less invasive


It’s easier to add layers of tunnels than a lane to a road. According to the Boring Company, stations could be as small as two parking spaces and thus easily integrated into city centers, parking garages, and residential areas.

Complements current and future transport


The Hyperloop shape provides the capacity to build other transport above or below the hyperloop tube, such as moving sidewalks, walkways, and e-scooter and cycle paths.

City planners can create designs to combine access to flying taxis, autonomous vehicles and hyperloops. 

Hyperloop criticism: What are the disadvantages of hyperloop technology? 

While hyperloop technology promises a new fifth mode of transport, it has several negatives: 

Costs

It’s hard to price the construction and infrastructure costs. For example, airtight seals are paramount for the pod hatches and doors, and they will require regular maintenance, which is problematic when you consider that most cities struggle to maintain bridges, train tracks, and roads in the first instance.

Land acquisition is a significant challenge. A NASA report into the commercial feasibility of hyperloop put the cost at $25-27 million per mile for the technology alone, excluding land acquisition. It estimated an almost entirely underwater track from Helsinki to Stockholm at $64 million per mile, including vehicles.

Comparatively, California high-speed rail costs anywhere from $63 to $65 million per mile, and in Europe, the cost is $43 million per mile. However, those figures include costs of land acquisition but exclude train sets. 

Hyperloop Safety


Safety is critical. Delft Hyperloop published a report in July 2020 which contends that the European hyperloop system needs at least the safety level of European commercial airlines in terms of passenger fatalities per passenger-kilometer. The report is design-agnostic and addresses potential safety scenarios in detail: 

Fire safety
While the low-pressure environment prevents fire from breaking out in the tubes, a fire inside a pod is a real threat. The report recommends a system where mist and fire suppressants are released automatically when smoke is detected.

Communication system challenges
How do you communicate within and to a hyperloop pod? The steel tube prevents wireless signals from reaching the pod. Further, due to the high speeds, pods frequently switch between communication cells, increasing the chance of handover failure and temporary communication loss.

One option is Li-Fi, a mobile wireless technology that uses light rather than radio frequencies to transmit data. Li-Fi is simpler than radio frequency communication and uses direct modulation similar to remote control units. LED light bulbs have high intensities and can therefore achieve substantial data rates.

Li-Fi is useful in environments that do not easily support Wi-Fi, such as aircraft cabins, hospitals, and hazardous environments.

However, cybersecurity is a significant threat to the hyperloop. The use of optical fibers could prevent hackers from physically intercepting communication signals. 

Emergency evacuation
Evacuating a hyperloop is difficult, as the tubes are designed with a limited number of exits. The goal is to enable passengers to exit the hyperloop safely.

Two ideas are posited:

  1. In-tube evacuation, which requires a locally pressurized tube through which passengers can walk towards the nearest emergency exit.
  2. Nearby pods can pick up passengers to speed up the evacuation process.

The report also considers a combined approach that joins internal safe havens and in-tube evacuation together.

Security
Hyperloop technology makers anticipate pods that depart every 30 seconds to two minutes. However, the shorter the gap between pod departures, the greater the risk for pile-up in the event of an accident.

Frequent departures require constant passenger flow, which makes strict passenger and baggage screening difficult.

The road from here to Hyperloop 

Credit: Hyperloop Transportation Technologies
Hyperloop technology has a long way to go before becoming reality

Despite the success of autonomous bullet trains in much of the world, many of today’s potential passengers greet the idea of traveling in a carriage in a tube as our ancestors would have viewed air travel – risky and unsafe.

For hyperloop to succeed, it requires a willing public. There are huge practical and psychological challenges to overcome before hyperloop becomes mainstream.

A fair number of companies are creating proprietary technology, and a lot of questions remain unanswered.

Hyperloop technology offers a compelling glimpse into a future where people are highly mobile and opportunity is not limited by geography. The cumulative innovation and convergence of engineering, design, and software is creating transportation unimaginable decades ago. 

 



Climate change is an infrastructure problem, just look at this EV charger map

Most of America’s 107,000 gas stations can fill several cars every five or 10 minutes at multiple pumps. Not so for electric vehicle chargers – at least not yet. Today the U.S. has around 43,000 public EV charging stations, with about 106,000 outlets. Each outlet can charge only one vehicle at a time, and even fast-charging outlets take an hour to provide 180-240 miles’ worth of charge; most take much longer.

The existing network is acceptable for many purposes. But chargers are very unevenly distributed; almost a third of all outlets are in California. This makes EVs problematic for long trips, like the 550 miles of sparsely populated desert highway between Reno and Salt Lake City. “Range anxiety” about longer trips is one reason electric vehicles still make up fewer than 1% of U.S. passenger cars and trucks.

This uneven, limited charging infrastructure is one major roadblock to rapid electrification of the U.S. vehicle fleet, considered crucial to reducing the greenhouse gas emissions driving climate change.

It’s also a clear example of how climate change is an infrastructure problem – my specialty as a historian of climate science at Stanford University and editor of the book series “Infrastructures.”

Credit: The Conversation
Distribution of EV charging stations in the US.

Over many decades, the U.S. has built systems of transportation, heating, cooling, manufacturing and agriculture that rely primarily on fossil fuels. The greenhouse gas emissions those fossil fuels release when burned have raised global temperature by about 1.1°C (2°F), with serious consequences for human lives and livelihoods, as the recent report from the U.N. Intergovernmental Panel on Climate Change demonstrates.

The new assessment, like its predecessor Special Report on Global Warming of 1.5°C, shows that minimizing future climate change and its most damaging impacts will require transitioning quickly away from fossil fuels and moving instead to renewable, sustainable energy sources such as wind, solar and tidal power.

That means reimagining how people use energy: how they travel, what and where they build, how they manufacture goods and how they grow food.

Gas stations were transport infrastructure, too

Gas-powered vehicles with internal combustion engines have completely dominated American road transportation for 120 years. That’s a long time for path dependence to set in, as America built out a nationwide system to support vehicles powered by fossil fuels.

Gas stations are only the endpoints of that enormous system, which also comprises oil wells, pipelines, tankers, refineries and tank trucks – an energy production and distribution infrastructure in its own right that also supplies manufacturing, agriculture, heating oil, shipping, air travel and electric power generation.

Without it, your average gas-powered sedan wouldn’t make it from Reno to Salt Lake City either.

Credit: Jim Watson/AFP via Getty Images
Gas-powered vehicles have dominated U.S. road transportation for 120 years and have a web of infrastructure supporting them.

Fossil fuel combustion in the transport sector is now America’s largest single source of the greenhouse gas emissions causing climate change. Converting to electric vehicles could reduce those emissions quite a bit. A recent life cycle study found that in the U.S., a 2021 battery EV – charged from today’s power grid – creates only about one-third as much greenhouse gas emissions as a similar 2021 gasoline-powered car. Those emissions will fall even further as more electricity comes from renewable sources.

Despite higher upfront costs, today’s EVs are actually less expensive to own than gas-powered cars due to their greater energy efficiency and far fewer moving parts. An EV owner can expect to save US$6,000-$10,000 over the car’s lifetime versus a comparable conventional car. Large companies including UPS, FedEx, Amazon, and Walmart are already switching to electric delivery vehicles to save money on fuel and maintenance.

Credit: EIA
The annual carbon emissions in the US since 1975.

All this will be good news for the climate – but only if the electricity to power EVs comes from low-carbon sources such as solar, tidal, geothermal and wind. (Nuclear is also low-carbon, but expensive and politically problematic.) Since our current power grid relies on fossil fuels for about 60% of its generating capacity, that’s a tall order.

To achieve maximal climate benefits, the electric grid won’t just have to supply all the cars that once used fossil fuels. Simultaneously, it will also need to meet rising demand from other fossil fuel switchovers, such as electric water heaters, heat pumps and stoves to replace the millions of similar appliances currently fueled by fossil natural gas.

The infrastructure bill

The 2020 Net-Zero America study from Princeton University estimates that engineering, building and supplying a low-carbon grid that could displace most fossil fuel uses would require an investment of around $600 billion by 2030.

The infrastructure bill now being debated in Congress was originally designed to get partway to that goal. It initially included $157 billion for EVs and $82 billion for power grid upgrades. In addition, $363 billion in clean energy tax credits would have supported low-carbon electric power sources, along with energy storage to provide backup power during periods of high demand or reduced output from renewables. During negotiations, however, the Senate dropped the clean energy credits altogether and slashed EV funding by over 90%.

Of the $15 billion that remains for electric vehicles, $2.5 billion would purchase electric school buses, while a proposed EV charging network of some 500,000 stations would get $7.5 billion – about half the amount needed, according to Energy Secretary Jennifer Granholm.

As for the power grid, the infrastructure bill does include about $27 billion in direct funding and loans to improve grid reliability and climate resilience. It would also create a Grid Development Authority under the U.S. Department of Energy, charged with developing a national grid capable of moving renewable energy throughout the country.

The infrastructure bill may be further modified by the House before it reaches President Joe Biden’s desk, but many of the elements that were dropped have been added to another bill that’s headed for the House: the $3.5 trillion budget plan.

As agreed to by Senate Democrats, that plan incorporates many of the Biden administration’s climate proposals, including tax credits for solar, wind and electric vehicles; a carbon tax on imports; and requirements for utilities to increase the amount of renewables in their energy mix. Senators can approve the budget by simple majority vote during “reconciliation,” though by then it will almost certainly have been trimmed again.

Overall, the bipartisan infrastructure bill looks like a small but genuine down payment on a more climate-friendly transport sector and electric power grid, all of which will take years to build out.

But to claim global leadership in avoiding the worst potential effects of climate change, the U.S. will need at least the much larger commitment promised in the Democrats’ budget plan.

Like an electric car, that commitment will seem expensive upfront. But as the recent IPCC report reminds us, over the long term, the potential savings from avoided climate risks like droughts, floods, wildfires, deadly heat waves and sea level rise would be far, far larger.

This article by Paul N. Edwards, William J. Perry Fellow in International Security and Senior Research Scholar, Stanford University, is republished from The Conversation under a Creative Commons license. Read the original article.

GPT-3 mimics human love for ‘offensive’ Reddit comments, study finds


Chatbots are getting better at mimicking human speech — for better and for worse.

A new study of Reddit conversations found chatbots replicate our fondness for toxic language. The analysis revealed that two prominent dialogue models are almost twice as likely to agree with “offensive” comments than with “safe” ones.

Offensive contexts

The researchers, from the Georgia Institute of Technology and the University of Washington, investigated contextually offensive language by developing “ToxiChat,” a dataset of 2,000 Reddit threads.

To study the behavior of neural chatbots, they extended the threads with responses generated by OpenAI’s GPT-3 and Microsoft’s DialoGPT.

They then paid workers on Amazon Mechanical Turk to annotate the responses as “safe” or “offensive.” Comments were deemed offensive if they were intentionally or unintentionally toxic, rude, or disrespectful towards an individual, like a Reddit user, or a group, such as feminists.

The stance of the responses toward previous comments in the thread was also annotated, as “Agree,” “Disagree,” or “Neutral.”

“We assume that a user or a chatbot can become offensive by aligning themselves with an offensive statement made by another user,” the researchers wrote in their pre-print study paper.

Bad bots

The dataset contained further evidence of our love for the offensive.  The analysis revealed that 42% of user responses agreed with toxic comments, whereas only 13% agreed with safe ones.

They also found that the chatbots mimicked this undesirable behavior. Per the study paper:

We hypothesize that the higher proportion of agreement observed in response to offensive comments may be explained by the hesitancy of Reddit users to engage with offensive comments unless they agree. This may bias the set of respondents towards those who align with the offensive statement.

This human behavior was mimicked by the dialogue models: both DialoGPT and GPT-3 were almost twice as likely to agree with an offensive comment than a safe one.
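As a rough illustration of the measurement itself, agreement rates like these can be computed from annotated responses. The column names and records below are hypothetical, not the actual ToxiChat schema:

```python
import pandas as pd

# Hypothetical annotations: each row is a response to a parent comment,
# with the parent's offensiveness label and the response's stance.
df = pd.DataFrame({
    "parent_label": ["offensive", "offensive", "safe", "safe", "offensive"],
    "stance":       ["Agree",     "Neutral",   "Agree", "Disagree", "Agree"],
})

# Share of responses agreeing, split by whether the parent was offensive.
agree_rate = (
    df.assign(agrees=df["stance"].eq("Agree"))
      .groupby("parent_label")["agrees"]
      .mean()
)
print(agree_rate)
# The paper reports ~42% agreement with offensive comments vs ~13% with
# safe ones for Reddit users, and roughly a 2x gap for the two models.
```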

Credit: Baheti et al.
Reddit users were more likely to reply to offensive comments.

The responses generated by humans had some significant differences.

Notably, the chatbots tended to respond with more personal attacks directed towards individuals, while Reddit users were more likely to target specific demographic groups.

Credit: Baheti et al.
The top 10 target groups for Reddit user responses, DGPT responses, and GPT-3 responses. Target groups are organized in decreasing frequency in each decagon, starting clockwise from the top-right corner.

Changing behavior

Defining “toxic” behavior is a complicated and subjective task.

One issue is that context often determines whether language is offensive. ToxiChat, for instance, contains replies that seem innocuous in isolation, but appear offensive when read alongside the preceding message.

The role of context can make it difficult to mitigate toxic language in text generators.

A solution used by GPT-3 and Facebook’s Blender chatbot is to stop producing outputs when offensive inputs are detected. However, this can often generate false-positive predictions.

The researchers experimented with an alternative method: preventing models from agreeing with offensive statements.

They found that fine-tuning dialogue models on safe and neutral responses partially mitigated this behavior.

But they’re more excited by another approach: developing models that defuse fraught situations by “gracefully [responding] with non-toxic counter-speech.”

Good luck with that.


Bose’s QuietComfort 45 headphones cancel more noise and last longer

Bose’s QuietComfort 35 were some of the most popular headphones of the last few years thanks to their intense noise canceling, cozy design, and solid sound quality (don’t @ me, audiophiles). Today the company released an updated version of the classic, the fittingly named QuietComfort 45.

The headphones are priced at $329, which puts them a notch below the $380 Bose 700 that was the ostensible successor to the QuietComfort line (they’re also a little cheaper than the QC35 II, which cost $349 at launch). They offer a new noise cancellation system that is better at eliminating mid-range frequencies “typically found in commuter trains, busy office spaces, and cafes”; it sounds like the headphones are better at eliminating chatter, and not just the low rumble of an air conditioner or airplane engine.

Meanwhile, a new (to the QuietComfort line, anyway) ‘Aware’ mode makes it easy to hear your environment and hold conversations without taking the headphones off. It’s a common feature in noise-canceling headphones these days, but it’s still nice to see it properly implemented here.

Credit: Bose

Bose says the QC45 also uses its noise cancellation tech for calls, with beam-forming microphones to isolate your voice while ignoring distractions “like a coffee grinder or barking dog”, which are actually pretty common background sounds in my own home, not gonna lie.

The design is very much reminiscent of earlier QuietComfort models, with a lightweight plastic design that folds flat for travel. They’re also a good choice for those who prefer physical buttons over touch-sensitive earcups; there are physical buttons to control volume, playback, pairing, and access to features like voice assistants and noise-canceling settings.

As a welcome bonus, the headphones support multi-point pairing (AKA being connected to multiple devices at once), and they can use Bose’s SimpleSync to pair with Bose soundbars when you want to watch TV silently. The battery is claimed to last 24 hours, and the headphones finally charge via USB-C. Thank goodness.

The headphones are up for pre-order today in both black and off-white colorways and will be available on September 23.


World’s first cobalt-free EV batteries finally launched — here’s why it matters

Chinese EV battery manufacturer SVOLT has unveiled what it claims to be the world’s first cobalt-free battery.

At the Chengdu Motor Show, the company showcased its innovative battery pack inside an Ora Cherry Cat from Chinese automaker Great Wall Motors.

And that’s definitely good news.

Credit: Ora
The Ora Cherry Cat.

Why we need cobalt alternatives

As the world moves towards reducing carbon emissions, demand for electrified transport has been rising.

Most of today’s electric vehicles use lithium-ion batteries, which require cobalt in their production. That’s because cobalt is crucial for boosting energy density and battery life: it helps keep the cathode’s structure stable as lithium ions move in and out.

Unfortunately, cobalt’s value comes with a cost.

For starters, cobalt is a finite resource, and demand for it is rising. According to the Cobalt Institute’s latest report, demand for cobalt in lithium-ion batteries, used primarily in portable electronics and electric vehicles, increased at an annual rate of 10% between 2013 and 2020.

The Institute also predicts demand for cobalt batteries to rise further, as its projections show demand for EVs growing 30% annually through 2025. It’s therefore not unreasonable to wonder whether global cobalt reserves will be sufficient to keep up with the increasing pace of production.

What’s more, the prized mineral is largely concentrated in the Democratic Republic of Congo, which accounts for 66% of the global mine supply. Sadly, an estimated 20% of the mines in the DRC rely on child workers.

Finally, after assessing the life cycle of the cobalt extraction route, scientists have found that using the mineral isn’t 100% green.

Instead, their research suggests that blasting and electricity consumption in cobalt mining damage the environment, and that the process contributes to global warming through carbon dioxide and nitrogen dioxide emissions.

Going cobalt-free

For all the above reasons, many automakers have been trying to cut down on cobalt for their EV batteries.

For instance, Tesla announced during its 2020 Battery Day that it will make cobalt-free electric vehicles. Also last year, GM unveiled its Ultium battery, which uses 70% less cobalt than other batteries currently on the market.

Now SVOLT promises a fully cobalt-free battery with a capacity of 82.5kWh, which can deliver a 600km range on a single charge and allow a car to accelerate from zero to 60mph in under five seconds.

The company said that its sustainable product is expected to go on sale in China, but hasn’t offered any specific timeline on when that might happen.



 

How China is restricting kids’ online gaming to a mere 3 hours a week

Welcome to another episode of China regulating technology, and today’s news is about gaming restrictions. In a set of new rules, the country has restricted children to only three hours of online gaming per week.

If you think this means kids can play games for three hours at any time during the week, you’re wrong. The administration has only allowed online gaming from 8PM to 9PM on Fridays, Saturdays, Sundays, and public holidays. It feels like my parents setting dedicated TV time around school exams.

These rules build on restrictions issued in 2019, when children under 18 were limited to 90 minutes of game time per day, with no gaming after 10PM and a cap on in-app purchases.

Game providers like Tencent and NetEase have been warned that they can’t serve games to minors outside the designated time. Plus, login and registration are mandatory for all users to play any game.

Last year, China rolled out its real-name authentication system for games. Under those rules, game publishers need to verify users via their names and national IDs (akin to a social security number) and, based on age, monitor and restrict gameplay.
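Once identity and age are verified, the time gate itself is the simple part. Here’s a toy sketch of the kind of check a publisher might run at login, assuming the 8PM to 9PM window on Fridays, weekends, and public holidays; the holiday list is a placeholder:

```python
from datetime import datetime, time

# Placeholder public holidays; a real system would use an official calendar.
PUBLIC_HOLIDAYS = {(10, 1), (10, 2)}  # e.g., National Day week

def minor_may_play(now: datetime) -> bool:
    """Return True if a verified minor may play at this moment."""
    allowed_day = (
        now.weekday() >= 4  # Friday=4, Saturday=5, Sunday=6
        or (now.month, now.day) in PUBLIC_HOLIDAYS
    )
    return allowed_day and time(20, 0) <= now.time() < time(21, 0)

print(minor_may_play(datetime(2021, 9, 3, 20, 30)))  # Friday 8:30PM -> True
print(minor_may_play(datetime(2021, 9, 1, 20, 30)))  # Wednesday -> False
```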

A photo from the Honor of Kings world championship

With these systems already in place, game publishers will just need to tweak their timings. But the changes might take a bite out of their revenue. China has also appealed to parents and schools to educate children about online gaming.

China’s crackdown on game addiction is not sudden. While the authorities have taken initiatives to promote eSports, they’ve also been aggressive in curbing gaming addiction. A state media outlet published an article this month labeling Honor of Kings, a multiplayer battle arena game from Tencent with more than 100 million daily players, as ‘spiritual opium.’

The game publishers might be unhappy, but they’re following these rules strictly. Companies like Tencent have been using facial scanning tech to catch minors who might be breaking the rules to play games.

Daniel Ahmad, a gaming analyst at Niko Partners, told the Financial Times that the policy is “extremely restrictive.” He noted that, according to Tencent, players under 16 account for just 2.6% of player spend, so there would be some impact on that revenue. Bloomberg analysts also noted that companies such as Tencent and NetEase will see a moderate financial impact from the new restrictions.


Instagram will demand your date of birth for safety — and ads, of course

Instagram is making a new change to its policy that will require users to submit their date of birth to the company. If you don’t share your birthday after repeated reminders, you won’t be able to use the service.

The company said that this feature will help it improve user safety, and allow the platform to build out more features to that end. Over the past few months, Instagram has built a ton of tools related to teen safety. In March, it rolled out an AI-powered feature to prevent adults from messaging kids. Last month, it made all new accounts aged under 16 private by default.

If you didn’t add your birthday when you joined, Instagram will show you a notification about it multiple times, like the one below.

Instagram will remind you to share your birthday through a notification

The company also plans to restrict certain content based on different age groups. It’ll ask you to share your birthday, if you haven’t already, before showing you content that might be sensitive.

However, there’s a quirk to this new change. Instagram wasn’t shy in admitting that this will help it show “you more relevant ads.” This bit irks me. I get that, as a social media platform, you need age restriction as a feature, but using that information for ads is a sneaky move. But then again, we’re talking about a Facebook-owned company.

There’s also no information on whether advertisers will merely learn that you’re an adult, or get your specific birthdate too. We’ve asked the company to share more details, and we’ll update the story if we hear back.

You can read more about Instagram‘s birthday sharing policy here.


Excel autocorrect errors are plaguing gene research

Auto-correction, or predictive text, is a common feature of many modern tech tools, from internet searches to messaging apps and word processors. Auto-correction can be a blessing, but when the algorithm makes mistakes it can change the message in dramatic and sometimes hilarious ways.

Our research shows autocorrect errors, particularly in Excel spreadsheets, can also make a mess of gene names in genetic research. We surveyed more than 10,000 papers with Excel gene lists published between 2014 and 2020 and found more than 30% contained at least one gene name mangled by autocorrect.

This research follows our 2016 study that found around 20% of papers contained these errors, so the problem may be getting worse. We believe the lesson for researchers is clear: it’s past time to stop using Excel and learn to use more powerful software.

Excel makes incorrect assumptions

Spreadsheets apply predictive text to guess what type of data the user wants. If you type in a phone number starting with zero, it will recognize it as a numeric value and remove the leading zero. If you type “=8/2”, the result will appear as “4”, but if you type “8/2” it will be recognized as a date.

With scientific data, the simple act of opening a file in Excel with the default settings can corrupt the data due to auto-correction. It’s possible to avoid unwanted auto-correction if cells are pre-formatted prior to pasting or importing data, but this and other data hygiene tips aren’t widely practiced.

In genetics, it was recognized way back in 2004 that Excel was likely to convert about 30 human gene and protein names to dates. These names were things like MARCH1, SEPT1, Oct-4, jun, and so on.
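To see what that damage looks like in practice, here’s a minimal sketch that reads a gene list as plain text and flags entries Excel has likely mangled into dates. The file name, column name, and regex are illustrative, not the screening pipeline used in the studies:

```python
import pandas as pd

# Read everything as text so pandas itself performs no type conversion.
genes = pd.read_csv("supplementary_gene_list.csv", dtype=str)["gene"]

# Patterns Excel typically produces from names like SEPT1 or MARCH1:
# "1-Sep", "Mar-01", or full dates such as "2006-09-01".
months = "Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec"
date_like = rf"^(\d{{1,2}}-({months})|({months})-\d{{2}}|\d{{4}}-\d{{2}}-\d{{2}})$"

mangled = genes[genes.str.match(date_like, case=False, na=False)]
print(f"{len(mangled)} suspected autocorrect casualties:")
print(mangled.to_string(index=False))
```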

Several years ago, we spotted this error in supplementary data files attached to a high impact journal article and became interested in how widespread these errors are. Our 2016 article indicated that the problem affected middle and high ranking journals at roughly equal rates. This suggested to us that researchers and journals were largely unaware of the autocorrect problem and how to avoid it.

As a result of our 2016 report, the Human Gene Name Consortium, the official body responsible for naming human genes, renamed the most problematic genes. MARCH1 and SEPT1 were changed to MARCHF1 and SEPTIN1 respectively, and others had similar changes.

An example list of gene names in Excel.

An ongoing problem

Earlier this year we repeated our analysis. This time we expanded it to cover a wider selection of open access journals, anticipating researchers and journals would be taking steps to prevent such errors appearing in their supplementary data files.

We were shocked to find in the period 2014 to 2020 that 3,436 articles, around 31% of our sample, contained gene name errors. It seems the problem has not gone away and is actually getting worse.

Small errors matter

Some argue these errors don’t really matter, because 30 or so genes are only a small fraction of the roughly 44,000 in the entire human genome, and the errors are unlikely to overturn the conclusions of any particular genomic study.

Anyone reusing these supplementary data files will find this small set of genes missing or corrupted. This might be irritating if your research project examines the SEPT gene family, but it’s just one of many gene families in existence.

We believe the errors matter because they raise questions about how these errors can sneak into scientific publications. If gene name autocorrect errors can pass peer-review undetected into published data files, what other errors might also be lurking among the thousands of data points?

Spreadsheet catastrophes

In business and finance, there are many examples where spreadsheet errors led to costly and embarrassing losses.

In 2012, JP Morgan declared a loss of more than US$6 billion thanks to a series of trading blunders made possible by formula errors in its modeling spreadsheets. An analysis of thousands of spreadsheets at Enron Corporation, from before its spectacular downfall in 2001, showed almost a quarter contained errors.

A now-infamous article by Harvard economists Carmen Reinhart and Kenneth Rogoff was used to justify austerity cuts in the aftermath of the global financial crisis, but the analysis contained a critical Excel error that led to omitting five of the 20 countries in their modeling.

Just last year, a spreadsheet error at Public Health England led to the loss of data corresponding to around 15,000 positive COVID-19 cases. This compromised contact tracing efforts for eight days while case numbers were rapidly growing. In the healthcare setting, error rates for clinical data entered into spreadsheets can be as high as 5%, while a separate study of hospital administration spreadsheets showed 11 of 12 contained critical flaws.

In biomedical research, a mistake in preparing a sample sheet resulted in a whole set of sample labels being shifted by one position and completely changing the genomic analysis results. These results were significant because they were being used to justify the drugs patients were to receive in a subsequent clinical trial. This may be an isolated case, but we don’t really know how common such errors are in research because of a lack of systematic error-finding studies.

Better tools are available

Spreadsheets are versatile and useful, but they have their limitations. Businesses have moved away from spreadsheets to specialized accounting software, and nobody in IT would use a spreadsheet to handle data when SQL-based database systems are far more robust and capable.

Yet it is still common for scientists to use Excel files to share their supplementary data online. As science becomes more data-intensive and the limitations of Excel become more apparent, it may be time for researchers to give spreadsheets the boot.

In genomics and other data-heavy sciences, scripted computer languages such as Python and R are clearly superior to spreadsheets. They offer benefits including enhanced analytical techniques, reproducibility, auditability, and better management of code versions and contributions from different individuals. They may be harder to learn initially, but the benefits to better science are worth it in the long haul.
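To give a flavor of what scripted analysis buys you, here is a rough illustration in the spirit of our screen (not our actual pipeline): a few lines of Python can flag date-like values hiding in a gene-name column.

```python
import re

# Values such as "1-Mar" or "Sep-01" betray gene symbols (MARCH1, SEPT1)
# that Excel has already converted to dates.
DATE_LIKE = re.compile(
    r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$"
    r"|^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)-\d{2}$",
    re.IGNORECASE,
)

def flag_mangled(gene_names):
    """Return the entries that look like dates rather than gene symbols."""
    return [g for g in gene_names if DATE_LIKE.match(g.strip())]

print(flag_mangled(["TP53", "1-Mar", "SEPT1", "Sep-01"]))  # ['1-Mar', 'Sep-01']
```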

Excel is suited to small-scale data entry and lightweight analysis. Microsoft says Excel’s default settings are designed to satisfy the needs of most users, most of the time.

Clearly, genomic science does not represent a common use case. Any data set larger than 100 rows is just not suitable for a spreadsheet.

Researchers in data-intensive fields (particularly in the life sciences) need better computer skills. Initiatives such as Software Carpentry offer workshops to researchers, but universities should also focus more on giving undergraduates the advanced analytical skills they will need.

Article by Mark Ziemann, Lecturer in Biotechnology and Bioinformatics, Deakin University and Mandhri Abeysooriya, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Toyota’s e-Palette hits a blind Paralympian: is ‘semi-automation’ to blame?

Need a way to turn people off autonomous vehicles (AVs)? Toyota found one at the Tokyo Paralympic Games this week, when one of its self-driving e-Palette transport pods collided with and injured a visually impaired pedestrian. The company suspended all self-driving e-Palette pods at the Games village in response.

What exactly happened?

While Toyota claims the accident happened during manual control, the company has suspended the use of all e-Palette self-driving pods.

In a YouTube video, Toyota Chief Executive Akio Toyoda explained that the vehicle was under manual control at the time of the accident.

The driver used the control joystick to stop at a T-junction and then — when attempting to turn — hit the athlete while moving at around 1 or 2 kilometers per hour.

Luckily, the athlete wasn’t badly hurt and was able to walk back to their residence after receiving medical attention. Toyota is cooperating with local police to determine the cause of the accident and will conduct its own investigation.

To date, it’s not clear whether it was the vehicle, the operator holding the joystick, or a combination of both that caused the accident.

Passengers aren’t the only ‘client’

The Toyota e-Palette features large doors and electric ramps for quick and easy boarding.

Toyota’s attention to accessibility when it comes to the e-Palette is commendable, but you could argue it didn’t do enough.

For the people inside the vehicle, it’s great. The e-Palette is a low-speed self-driving pod at SAE level 4. It features large sliding doors, low floors, and electric ramps, which ease access. It’s also capable of transporting up to four passengers in wheelchairs along with standing room, which is impressive.

But when it comes to people outside the vehicle it gets trickier.

The e-Palette has headlamps designed to mimic eye contact, which is neat… but isn’t much help to the visually impaired. That’s why a fundamental pillar of car-pedestrian communication is that cars need to be seen and heard.

Since 2019, electric vehicle makers have been required to include an Acoustic Vehicle Alerting System (AVAS). This is in response to claims that EVs are too quiet to be heard by blind people or their guide dogs.

It’s of course still unclear whether the pedestrian was able to hear the car in this incident, but Toyota’s chief executive did say it showed that “autonomous vehicles are not yet realistic for normal roads.”

Is the problem automation… or semi-automation?

The Toyota chief’s comment is understandable — and frankly correct — but it also skirts around one of the bigger issues: people.

When car-makers promised us the dream of self-driving vehicles, it was effectively cars that drive themselves while we take a nap or lead a meeting.

[Read:  The Taliban love Toyota… but why?]

But what we’ve got so far is semi-autonomous vehicles like the e-Palette, where a driver sits behind the wheel, ideally hyper-alert with their hands at the ready to take over control. But are drivers capable of interacting safely with semi-automation?

In most cases, yes. However, semi-autonomous vehicles (with the help of humans) have been killing people since 2018, when a safety driver in a self-driving Uber failed to notice a pedestrian until it was too late.

Tesla is also currently under scrutiny after 11 cases of its vehicles colliding with emergency vehicles. There was another parked-car collision just this week.

While these vehicles are equipped with alarms, automatic deceleration, and other bells and whistles to ensure driver vigilance, they are hardly a posterchild for further automation.

Do I trust machines more than humans? Kinda

If Toyota’s e-Palette was fully autonomous, it wouldn’t have to rely on the driving skills or the focus of humans. In some respects, I trust a road full of completely autonomous vehicles more than a mix of semi-autonomous and zero automation cars. 

Why? Because humans are unpredictable. They drive while drunk and stoned, and speed for fun. Autonomous vehicles do not. So if we can work out the details (OK, there are quite a few), like their ability to distinguish between stop signs on the streets and on trucks, and to identify and stop for humans as well as animals, things might just get interesting.

But there’s no way AVs will roll out without the gradual increase of autonomous functionality. A lot of drivers don’t seem able to cope with the current level of automation. This is the rub and I can’t see a way to reconcile this.

Meanwhile, we see more and more crashes and other incidents that fail to convince the average person of the future safety of autonomous vehicles. The road between L4 and L5 automation is long, and it’s proving painful.

Do EVs excite your electrons? Do ebikes get your wheels spinning? Do self-driving cars get you all charged up?

Then you need the weekly SHIFT newsletter in your life. Click here to sign up.

Tesla on Autopilot crashes into two parked cars… again…

On Saturday morning, a 2019 Tesla Model 3, reportedly on Autopilot, crashed into two parked cars in Orlando, Florida.

The Orlando division of the Florida Highway Patrol (FHP) announced the incident on Twitter.

According to FHP’s report, at the time of the incident, a trooper had stopped to help another driver whose 2012 Mercedes GLK 350 was disabled at the side of Interstate 4 in Orlando.

Credit: Florida Highway Patrol
The 2019 Tesla Model 3 after the crash.

The highway officer had already stepped out of the police vehicle, a 2018 Dodge Charger, when the Tesla ran into it. First, it hit the left side of the police car, and then it hit the Mercedes.

Fortunately, there were no fatalities. According to the Associated Press, the 27-year-old Tesla driver and the driver of the disabled Mercedes sustained minor injuries, while the police officer remained unhurt.

[Did you know SHIFT is taking the stage this fall? Together with an amazing line-up of experts, we will explore the future of mobility during TNW Conference 2021. Secure your ticket now!]

CNBC reports that the Model 3 driver told officers that she was using Autopilot when the collision took place. Nevertheless, the incident will be under investigation in order to determine whether Autopilot caused or contributed to the crash.

Credit: Florida Highway Patrol
The police vehicle that got hit.

The police have informed the National Highway Traffic Safety Administration (NHTSA) and Tesla — which hasn’t provided any comment yet — about the incident.

Sadly, it seems that Autopilot accidents are happening too often to be ignored.

Just two weeks ago, the US government, led by the NHTSA, opened an official probe into Tesla’s Autopilot, prompted by a series of crashes involving Teslas and emergency vehicles. The investigation will cover Model Y, X, S, and 3 vehicles released from 2014 through 2021, amounting to some 765,000 units.

It’s also encouraging that, following the investigation, Democratic senators Richard Blumenthal and Ed Markey asked the Federal Trade Commission to look into Tesla’s claims about its Autopilot and Full Self-Driving capabilities.

The second initiative is especially promising, as simply identifying potential software deficiencies isn’t enough.

What lies at the core of this problem is the misconception that Autopilot can deliver fully autonomous driving. And while Tesla has recently been offering warnings on its software’s limitations, we should nevertheless be wary of the “autonowashing” that usually lurks in its marketing strategy.


Do EVs excite your electrons? Do ebikes get your wheels spinning? Do self-driving cars get you all charged up? 

Then you need the weekly SHIFT newsletter in your life. Click here to sign up.

New satellite tech could reveal the cause of the rare milky seas

 

“The whole appearance of the ocean was like a plain covered with snow. There was scarce a cloud in the heavens, yet the sky … appeared as black as if a storm was raging. The scene was one of awful grandeur, the sea having turned to phosphorus, and the heavens being hung in blackness, and the stars going out, seemed to indicate that all nature was preparing for that last grand conflagration which we are taught to believe is to annihilate this material world.”
– Captain Kingman of the American clipper ship Shooting Star, offshore of Java, Indonesia, 1854

For centuries, sailors have been reporting strange encounters like the one above. These events are called milky seas. They are a rare nocturnal phenomenon in which the ocean’s surface emits a steady bright glow. They can cover thousands of square miles and, thanks to the colorful accounts of 19th-century mariners like Capt. Kingman, milky seas are a well-known part of maritime folklore. But because of their remote and elusive nature, they are extremely difficult to study and so remain more a part of that folklore than of science.

I’m a professor of atmospheric science specializing in satellites used to study Earth. Via a state-of-the-art generation of satellites, my colleagues and I have developed a new way to detect milky seas. Using this technique, we aim to learn about these luminous waters remotely and guide research vessels to them so that we can begin to reconcile the surreal tales with scientific understanding.

Sailors’ tales

To date, only one research vessel has ever encountered a milky sea. That crew collected samples and found a strain of luminous bacteria called Vibrio harveyi colonizing algae at the water’s surface.

Unlike bioluminescence that happens close to shore, where small organisms called dinoflagellates flash brilliantly when disturbed, luminous bacteria work in an entirely different way. Once their population gets large enough – about 100 million individual cells per milliliter of water – a sort of internal biological switch is flipped and they all start glowing steadily.

Luminous bacteria cause the particles they colonize to glow. Researchers think the purpose of this glow could be to attract fish that eat them. These bacteria thrive in the guts of fishes, so when their populations get too big for their main food supply, a fish’s stomach makes a great second option. In fact, if you go into a refrigerated fish locker and turn off the light, you may notice that some fish emit a greenish-blue glow – this is bacterial light.

Now imagine if a gargantuan number of bacteria, spread across a huge area of open ocean, all started glowing simultaneously. That makes a milky sea.

While biologists know a lot about these bacteria, what causes these massive displays remains a mystery. If bacteria growing on algae were the main cause of milky seas, they’d be happening all over the place, all the time. Yet, per surface reports, only about two or three milky seas occur per year worldwide, mostly in the waters of the northwest Indian Ocean and off the coast of Indonesia.

Researchers found a milky sea event off the coast of Somalia, seen here as a pale swoosh in the top left image. The other panels show sea surface temperature, ocean currents and chlorophyll. Steven D. Miller/NOAA

Satellite solutions

If scientists want to learn more about milky seas, they need to get to one while it’s happening. Trouble is, milky seas are so elusive that it has been almost impossible to sample them. This is where my research comes into play.

Satellites offer a practical way to monitor the vast oceans, but it takes a special instrument able to detect light around 100 million times fainter than daylight. My colleagues and I first explored the potential of satellites in 2004 when we used U.S. defense satellite imagery to confirm a milky sea that a British merchant vessel, the SS Lima, reported in 1995. But the images from these satellites were very noisy, and there was no way we could use them as a search tool.

We had to wait for a better instrument – the Day/Night Band – planned for the National Oceanic and Atmospheric Administration’s new constellation of satellites. The new sensor went live in late 2011, but our hopes were initially dashed when we realized the Day/Night Band’s high sensitivity also detected light emitted by air molecules. It took years of studying Day/Night Band imagery to be able to interpret what we were seeing.

Finally, on a clear moonless night in early 2018, an odd swoosh-shaped feature appeared in the Day/Night Band imagery offshore Somalia. We compared it with images from the nights before and after. While the clouds and airglow features changed, the swoosh remained. We had found a milky sea! And now we knew how to look for them.

This milky sea off the coast of Java was the size of Kentucky and lasted for more than a month. Steven D. Miller/NOAA

 

The “aha!” moment that unveiled the full potential of the Day/Night Band came in 2019. I was browsing the imagery looking for clouds masquerading as milky seas when I stumbled upon an astounding event south of the island of Java. I was looking at an enormous swirl of glowing ocean that spanned over 40,000 square miles (100,000 square km) – roughly the size of Kentucky. The imagery from the new sensors provided a level of detail and clarity that I hadn’t imagined possible. I watched in amazement as the glow slowly drifted and morphed with the ocean currents.

We learned a lot from this watershed case: how milky seas are related to sea surface temperature, biomass, and the currents – important clues to understanding their formation. As for the estimated number of bacteria involved? Approximately 100 billion trillion cells – nearly the total estimated number of stars in the observable universe!
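That figure is easy to sanity-check against the quorum density mentioned earlier. This back-of-envelope arithmetic is our own, not a result from the study, but the numbers hang together: 100 billion trillion cells spread over a Kentucky-sized patch of ocean imply a glowing surface layer roughly a centimeter deep.

```python
# Rough consistency check: cells / quorum density = glowing water volume.
cells = 1e23                 # ~100 billion trillion bacteria
area_km2 = 100_000           # ~40,000 sq mi of glowing ocean surface
quorum = 1e8                 # cells per mL (= per cm^3) needed to glow

area_cm2 = area_km2 * 1e10   # 1 km^2 = 1e10 cm^2
volume_cm3 = cells / quorum  # water volume at quorum density
depth_cm = volume_cm3 / area_cm2

print(f"Implied glowing layer: ~{depth_cm:.0f} cm deep")  # ~1 cm
```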

The two images on the left were taken with older satellite technology while the images on the right show the high-definition imagery produced by the Day/Night Band sensor. Steven D. Miller/NOAA

The future is bright

Compared with the old technology, viewing Day/Night Band imagery is like putting on glasses for the first time. My colleagues and I have analyzed thousands of images taken since 2013, and we’ve uncovered 12 milky seas so far. Most happened in the very same waters where mariners have been reporting them for centuries.

Perhaps the most practical revelation is how long a milky sea can last. While some last only a few days, the one near Java carried on for over a month. That means that there is a chance to deploy research craft to these remote events while they are happening. That would allow scientists to measure them in ways that reveal their full composition, how they form, why they’re so rare and what their ecological significance is in nature.

If, like Capt. Kingman, I ever do find myself standing on a ship’s deck, casting a shadow toward the heavens, I’m diving in!

Article by Steven D. Miller, Professor of Atmospheric Science, Colorado State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

What robot swarms can teach us about making collective decisions

 

Did you know Neural is taking the stage this fall? Together with an amazing line-up of experts, we will explore the future of AI during TNW Conference 2021. Secure your ticket now!

You find a new restaurant with terrific food, but when you suggest meeting there in a group text to your friends, the choice to meet at the same old place carries the day.

Next time, you should consider persuading your friends one by one, rather than reaching out to the group as a whole.

Research conducted by my colleagues and me using swarms of robots suggests that this less-is-more strategy of distributing information over time can increase the probability of getting a group to choose the best option. Our results could make it easier to develop microscopic robots that work inside the body and could have implications for how information spreads on social media.

Our robot swarm study looked at how opinions spread in large populations. We found that a population of uninformed individuals can cling to outdated beliefs and fail to adopt better available alternatives when information about the new options spreads to everyone all at once. Instead, when individuals only share the information one by one, the population can better adapt to changes and reach an agreement in favor of the best option.

Keeping it simple

In our study, published in July 2021 in the journal Science Robotics, we set up a swarm of autonomous robots that make collective decisions on the best available alternatives and operate in an environment that changes over time. We found that less was more: robot swarms with reduced social connections – meaning the number of other robots they can communicate with – adapted more effectively than globally connected swarms. This runs counter to the common belief in network science that more connections always lead to more effective information exchange. We show that there are situations when the opposite occurs.

We used Kilobots: small robots, each less than an inch and a half (3.8 cm) in diameter and height, that communicate by infrared light. We programmed 50 of them with very simple behaviors: random movements to explore the environment and basic voting rules to exchange opinions. The robot swarm scans an unknown environment and collectively selects the best site; for example, the site best suited for building a structure. Each robot develops its own opinion from its scans of the environment and regularly checks the opinion of a single random neighbor. If a robot receives a conflicting opinion, it resets its own opinion by polling other robots. This allows the swarm to reach consensus without getting deadlocked.
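The voting rule is simple enough to capture in a toy simulation. The sketch below is not our Kilobot firmware; the site qualities, swarm size, and polling size are illustrative assumptions, but it shows the reset-and-poll mechanic at work:

```python
import random

N_ROBOTS = 50
QUALITY = {"A": 0.9, "B": 0.6}  # hypothetical qualities of two sites

# Each robot starts with an opinion formed from its own (noisy) scan.
opinions = [random.choice(list(QUALITY)) for _ in range(N_ROBOTS)]

def poll(opinions, k=3):
    """Re-form an opinion by sampling a few robots, weighted by site quality."""
    sample = random.sample(opinions, k)
    return max(set(sample), key=lambda o: sample.count(o) * QUALITY[o])

for _ in range(2000):
    i = random.randrange(N_ROBOTS)
    neighbor = opinions[random.randrange(N_ROBOTS)]  # one random peer
    if neighbor != opinions[i]:
        # Conflicting opinion: reset and poll others rather than just flip.
        opinions[i] = poll(opinions)

print(max(set(opinions), key=opinions.count))  # usually "A", the better site
```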

Experiments with swarms of robots have shown that sporadic social interactions can increase the spread of newly discovered information, compared to sharing the information with all members of a group at once. Andreagiovanni Reina, CC BY-ND

 

The simplicity of the individual behavior isn’t merely a matter of convenience in our study. It’s key to building robot swarms of the future. These include swarms with very small robots like microscopic robots that operate in the body, robots with simple components like biodegradable robots for cleaning the ocean, and low-budget, single-use robots like those that could be damaged or destroyed in disaster sites. Robot swarms with minimal behaviors can also be a viable option for robots that operate without human supervision in otherwise inaccessible locations.

Nature knows the rule

To write the algorithms that control our robots, we built mathematical models that explain the spreading of opinions in populations of socially connected, uninformed individuals. This process is similar to collective decision-making in other settings, including animals and humans.

Specifically, our algorithm is inspired by the behavior of European honeybees when they collectively select the site to build their future nest. Bees interact locally with one another and exchange voting messages by vibrations. The bee colony makes decisions without any central authority.

Similar collective decisions can be observed in schools of fish, which seem to know the less-is-more rule. In fact, recent research has shown that schooling fish reduce their social network — the number of fish they pay attention to — when they need to quickly absorb new information, such as the source of a perceived threat.

Caught by surprise

Despite finding the less-is-more rule in nature, we did not expect to find it in our study of robot swarms. We were testing hundreds of robots running a model based on observations of the collective behavior of honeybees selecting a nest site. This model allows the swarm to make decisions that take into account the value of the option based on how much good news it’s receiving, whether it’s indicators of a good nest-building site or positive restaurant reviews. This means the swarm not only considers the relative quality of the alternatives but also their absolute quality, meaning whether any of the alternatives are good enough.

This corresponds to what organisms — including humans — typically do. For example, when picking where to eat, if all open restaurants serve meals below your standard for quality, you won’t care that one restaurant is 5% better than the others; you won’t eat out today. But if a couple of restaurants are very good, picking either of the two will be satisfying even if there is a 5% difference in quality between them.

When we implemented this in the robot swarm, we expected that the more the individuals were socially connected, the better the swarm would adapt to environmental changes. This is what is predicted and observed in most models of networked individuals. But we found the opposite: The less connected the group was, the better our robot swarm responded to a change.

We then built a mathematical model that described the system and explained the observed phenomenon. Environmental changes are discovered by a small group. In a globally connected network, the small group faces an almost impossible task in trying to overturn the established opinion of the majority, even if the environmental changes present a better alternative. Instead, when individuals interact sporadically and in small numbers, an opinionated minority can easily gain traction and change the opinion of the entire group in cases where the group opinion isn’t as strongly held as the minority’s opinion.

Lessons for social media

The less-is-more effect doesn’t hold up in all cases. We observed this phenomenon in networks where the individuals follow simple rules and changing the opinion of others is not instantaneous but requires some time. Humans, in certain contexts, have simple reactive behavior that doesn’t involve much thinking, and may therefore be subject to similar dynamics.

People are globally connected through social media, which influences the spread of opinions in large populations. Understanding how opinions change — and don’t — is crucial for facing the challenges of the digital age.

Article by Andreagiovanni Reina, FNRS Research Fellow, Université Libre de Bruxelles (ULB)

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘OK Boomer:’ how a TikTok meme traces the rise of Gen Z political consciousness

The phrase “OK Boomer” has become popular over the past two years as an all-purpose retort with which young people dismiss their elders for being “old-fashioned”.

“OK Boomer” began as a meme in TikTok videos, but our research shows the catchphrase has become much more. The simple two-word phrase is used to express personal politics and at the same time consolidate an awareness of intergenerational politics, in which Gen Z are coming to see themselves as a cohort with shared interests.

What does ‘OK Boomer’ mean?

The viral growth of the “OK Boomer” meme on social media can be traced to Gen Z musician @peterkuli’s remix OK Boomer, which he uploaded to TikTok in October 2019. The song was widely adopted in meme creations by his Gen Z peers, who call themselves “Zoomers” (the Gen Z cohort born in 1997-2012).

In the two-minute sound clip @peterkuli distilled an already-popular sentiment into a two-word phrase, accusing “Boomers” (those born during the 1946–64 postwar baby boom) of being condescending, being racist and supporting Donald Trump, who was then US president.

In essence, the “OK Boomer” meme emerged as a shorthand for Gen Z to push back against accusations of being a “fragile” generation unable to deal with hardship. It has since evolved into an all-purpose retort to older generations – especially Boomers – when they dispense viewpoints perceived as presumptive, condescending or politically incorrect.

The meme arose in a wider context of “Boomer blaming”. In this view, the older generation has bequeathed Gen Z a host of societal issues, from Brexit and Trump to intergenerational economic inequality and climate change.

From ‘big P’ politics to ‘everyday politics’ and ‘intergenerational politics’

In our recent study on forms of online activism and advocacy on TikTok, we looked at 1,755 “OK Boomer” posts from 2019 and 2020 and found young people used the meme to engage in “everyday politics”.

Unlike “big P” politics – the work of governments, parliaments and politicians – “everyday politics” are political interests, pursuits and discussions framed through personal experiences.

On TikTok, young people construct and communicate their “everyday politics” by displaying their personal identities in highly personable ways, to demonstrate solidarity with or challenge beliefs and principles in society.

The “OK Boomer” meme and others like it allow young people to partake in a form of “intergenerational politics”. This is the tendency for people from a particular age cohort to form a shared political consciousness and behaviors, usually in opposition to the political attitudes of other groups. This is also reminiscent of when Boomers themselves encountered their own intergenerational politics in the countercultures of the 1960s and 1970s.

Doing ‘politics’ on TikTok

On TikTok, political expression can take the form of viral dances and audio memes. Young people use youthful parlance and lingo, pop cultural references and emojis to shape their collective political culture. In our study, we found three meme forms were especially popular:

  • “Lip-sync activism” involved using lip-syncing to overlay one’s facial expressions and gestures over a soundtrack, either in agreement with or to challenge the lyrics and moral tone of a song.

‘Lip-sync activism’: @mokke.cos lip-syncs to @mrbeard’s ‘OK Boomer’ sound clip.

  • “Reacts via duets” made use of TikTok’s “duet” function, which lets users record their own video clip alongside an original. This compare-and-contrast style allows for juxtaposition (to oppose the original statement) or collaboration (to add to the original statement).

‘React via duet’: @kyuutpie’s duet to @irishmanalways, who had challenged Gen Z not to use technologies.

  • “Craft activism” featured users displaying the creative processes and production of “OK Boomer”-themed objects and art, such as drawings, embroidery, and 3D printing.

‘Craft activism’: @peytoncoffee painting ‘OK Boomer’. The video received more than 5 million likes.

Conveying hardship and tensions through TikTok memes

Memes have been used as collective symbols for community identification around specific political causes such as human rights advocacy, the #MeToo movement, and anti-racism campaigns during the pandemic.

Similarly, the “OK Boomer” meme has been deployed to discuss various controversial and contentious issues. This is often done in a reflexive way, using self-deprecating memes and ironic self-criticism to parody the excessively judgmental behavior of others.

Around 40% of the posts we examined focused on young people’s lifestyles and well-being. These posts detailed how Gen Z are often criticized by Boomers for their lifestyle and appearance choices, such as unconventional career pathways and wearing ripped jeans.

‘Boomer shocked’: an #OkBoomer meme video from @ditshap.

Gen Z TikTokers also expressed frustration towards the dismissive attitude that Boomers adopted towards their mental health. These posts suggest Boomers blame depression or anxiety on stereotypical causes such as “spending too much time on the phone” or “not drinking enough water”.

Unreasonable criticism from Boomers is a common theme in videos such as this one from @themermaidscale.

About 10% of our sample dealt with issues around gender and sexuality norms. In these cases, Gen Z felt their identity explorations and expressions were criticized by Boomers. Non-binary young people and those who did not follow gender norms for dress described being “dress-coded” by Boomers, and queer and transgender young people reported receiving rebukes for being open about their sexuality.

Gender and sexuality are a common topic for #OkBoomer videos, such as this one from @timk.mua.

Why do ‘OK Boomer’ memes matter?

Some scholars and commentators have criticized the “OK Boomer” meme as divisive and discriminatory against older people. However, as scholars of young people’s digital cultures we have found it more productive to understand the trend from the standpoint of Gen Z.

From this viewpoint, “OK Boomer” is a consequence of existing intergenerational discord, not its cause. Gen Z face growing threats such as climate change, political unrest, and generational economic hardship. Memes like “OK Boomer” are ways to express intergenerational everyday politics to consolidate a shared awareness of the perceived failure of the Boomers.

Further, most of the personal stories told through “OK Boomer” TikToks were deployed by Gen Z when they felt under attack for their lifestyle choices, dress code, expressions of sexuality, or mental health struggles. Like many Boomers did in their own youths, members of Gen Z value freedom of expression and identity exploration.

The retort of “OK Boomer” offers a counter-reaction and expresses indignation. But at the same time it carries a sense of desperation for agency and personal space, as well as some attention and care.

This article by Crystal Abidin, Associate Professor & ARC DECRA Fellow, Internet Studies, Curtin University and Meg Jing Zeng, Senior research associate, University of Zurich, is republished from The Conversation under a Creative Commons license. Read the original article.

Kanye West chatbot gives stunning update on DONDA release date

Kanye West has many talents, but punctuality is not among them. He was late for registration; he was late for orchestration. He even wrote a song about his tardiness, the aptly titled “Late.”

The most irritating instances of Ye’s procrastination are undoubtedly the delays to his album releases.

Fans have become particularly infuriated by the endless postponements to the launch of his latest record, DONDA.

I asked the man himself for an update — and got some concerning answers:

Talk To Kanye is an AI bot. #FreeKanye

Before you get alarmed about Yeezus’ welfare, I should let you in on a shocking secret: I wasn’t talking to the real Kanye.

These messages were, in fact, sent by an imposter: an AI chatbot called TalkToKanye.

The bot was created by Wesam Jawich, a software engineer. He told TNW that the idea emerged during conversations with a friend about the future of AI:

We realized there’s something really powerful about AI chatbots mimicking real people. Like, imagine wanting to learn about calculus and being able to ask Isaac Newton. Kanye’s new album is getting a lot of hype right now because it’s such a wild and funny rollout, so I just started hacking on it that weekend, and now we’re here.

Building the Yebot

Yeezus Christ! I hope an AI ASAP Rocky doesn’t see TalkToKanye’s messages, because the chatbot beef could get ugly.

Greetings Humanoids! Did you know we have a newsletter all about AI? You can subscribe to it right here.


NFTs aren’t just a fluke — they will change the way we experience and own digital media

Just like DeFi’s money legos are about to revolutionize finance, media legos will deeply alter the social layers of the web. They will change how creators issue, distribute, and monetize their work while defining new rules for content exploration, collecting, and community building. This post dives into the paradigm shifts underpinning the internet renaissance and reveals opportunities for innovators.

Source: https://async.art – layered artwork changing over time with 216 composable image combinations – coupled with a physical print.

Digital ownership and originality

NFTs are scarce, digital representations of a unique good or asset. BTC or ETH are also scarce but as monetary assets they have to be fungible – every unit is arbitrarily exchangeable for any other unit. NFTs are a horizontal piece of infrastructure that can be used to issue digital representations of art, music, collectibles, essays, books, virtual fashion or financial assets like invoices, warehouse receipts or real estate units.

They can represent exclusive ownership or access rights (think licenses) to the underlying piece of content, but they don’t have to. In most of the current early experiments, the underlying PNG, MP3, or other file can still be copied and distributed over the internet without limitations. What makes the NFT unique is its provenance – it links one original piece of media to its creator for eternity, and this link is cryptographically verifiable. Hence, NFTs are the first system that allows information to be both free and valuable at the same time, by having a provable record of originality.
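A toy example makes the provenance claim concrete. In the sketch below, the registry dict merely stands in for a real NFT contract and the address is made up; the point is that every copy of the file hashes to the same digest, and that digest maps to exactly one recorded creator:

```python
import hashlib

def digest(data: bytes) -> str:
    """Content fingerprint: identical bytes always yield identical hashes."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for an on-chain record: content hash -> (illustrative) creator address.
registry = {digest(b"<media bytes>"): "0xCreatorAddress"}

copy_of_file = b"<media bytes>"  # a freely circulating copy
print(registry.get(digest(copy_of_file)))  # "0xCreatorAddress": provenance intact
```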

The more a creation is experienced, circulated or remixed, the more value accrues to the original. The actual work is a derivative of the value of its simulations.

Professor of Media and Cultural Studies, McKenzie Wark, said:

The future of collecting may be less in owning the thing that nobody else has, and more in owning the thing that everybody else has.

This is a paradigm shift; the inversion of the internet; a renaissance.

Chart comparing how web2 media and crypto media handle different aspects of the art market.

Creator economy

Based on the concept of provenance, creators are empowered to issue media NFTs with baked-in creator fees. Every time a media asset is interacted with or sold on secondary markets, creators benefit economically from their work’s rising popularity.

Creator fees can be programmed in various ways, borrowing from DeFi’s financial primitives. For example, creator fees could vary based on the absolute volume of a secondary market transaction or the frequency of secondary market transactions, or they could be priced on a bonding curve. Rents or licensing fees on a daily basis might be explored – programmatic, continuous monetization. Any of these variations would be self-enforced through the programmable media asset itself – no need for legacy rights enforcement in front of meat-space courts.
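As an illustration of what “programmable” can mean here (a made-up fee schedule, not any marketplace’s actual rules), a creator fee might decay with sale size, so small trades pay a relatively higher royalty than whale trades:

```python
def creator_fee(sale_price_eth: float,
                base_rate: float = 0.10,
                floor_rate: float = 0.02) -> float:
    """Royalty owed to the creator for one secondary sale (illustrative)."""
    rate = max(floor_rate, base_rate / (1 + sale_price_eth / 10))
    return sale_price_eth * rate

for price in (0.5, 10, 100):
    print(price, round(creator_fee(price), 4))  # 0.0476, 0.5, 2.0 ETH
```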

Beyond that, media assets can be used to express the provenance of status through a sovereign proof of belonging and participation within a community.

Open data architectures

Creators and collectors won’t be locked into any given marketplace or interface. Instead, experiences and interfaces will be built around the NFTs. The underlying data architecture is turned upside down: in web2, data and media assets have been created and siloed in closed architectures (Bandcamp, Unity), but in web3, games, virtual worlds and other interfaces will create new experiences around the media asset itself. Based on this new logic, we might see the rise of completely new media standards with strong network effects.

Programmatic capital formation over IP

The DAO experiment of 2016 and the ICO hype of 2017 were early harbingers of what the future of capital formation over IP might look like. Crowd-funded creative work can be re-invented by combining media assets with DeFi’s financial primitives. What started with $ESSAY on Mirror or generalized freelance markets on Erasure Bay might be used to fund complex creative work like music albums, books, or movies up front, in return for an ownership stake in the media asset’s long-term upside – funding over IP.

Based on this development, we might see new types of record labels, art auction houses, or publishing houses with direct access to the underlying media asset (shout out to Sfermion). Crypto won’t just transform venture capital and asset management but also capital formation across the media landscape.

Stateful, interactive, and composable content to create new experiences

Most of the innovation we’ve seen in the NFT space to date has been collectible-focused rather than experience-focused. Within this new paradigm, we can create entirely new user experiences that weren’t possible before.

Collage of new NFT digital art.

Top left: a piece issued on Async Art (portfolio) which displays two underprivileged orphan boys and their professional idol (pilot, firefighter, astronaut) based on their annual preferences. On each of the boys’ birthdays the door opens and a BTC address is revealed to collect donations for them. The BTC sent will be programmatically distributed on their 18th birthdays.

Top right: B20 bundles the Beeple 20 Collection – one of the most historic and valuable art projects in the NFT space – along with the VR monuments it lives in. Ownership of this bundle has been fractionalized into B20 tokens. The bundle itself is immutable unless bought out entirely by anyone with the requisite DAI and B20 tokens.

Bottom left: Catalog powered by Zora enables musicians to issue scarce virtual vinyls which can be fractionalized and traded on secondary markets – letting the creators participate in the upside of every valuation change through programmatic fees.

Bottom right: Mirror’s $ESSAY experiment proved that creative work can be funded up front, leaving the backers and the issuer with an ownership stake in the actual work – with books, movies, or whatever else to follow.

A media asset can change its shapes, colors, sounds, text, or combinations thereof based on arbitrary data inputs. It can have various different states. A picture can change the colors or shapes within it based on time, weather, or geolocation data from a physical object, or based on interactions with its owners. It can be interactive. Taking this concept further, the state of one media asset can define the state of one or more other media assets – they are composable and interoperable.

Authors and movie makers could create stories that change sceneries, protagonists, or story lines based on interactions with the respective audience – analogous to Netflix’s Black Mirror episode Bandersnatch. New multiplayer games could be launched taking into account arbitrary input data around celebrities, performance data of athletes, and so on.

Experiences can be bundled by linking the ownership of a media asset to other services in the virtual or physical world like concert tickets, ownership or access to other virtual goods, merchandise and more. The owner of a branded fashion or digital art asset might get access to its customized, physical 3D printed equivalent etc.

If media assets are derivatives of the value of their simulations, then we can think of programmatic index portfolios of media assets (NFTX, Balancer), or bundled exposure to a piece of virtual land, a virtual museum, and the art exhibition taking place within it (B20), for example. The design space for the financialization of media assets is gigantic.

Contextual media graphs to create neo-search and social layers

Once all content has gone on chain, we will find new ways of organizing and contextualizing it. This category has barely been explored yet, given that we are still in the early installation phase of Carlota Perez’s technological surge cycle.

By creating not just a graph of media assets but by contextualizing those assets, their metadata and their relationships to one another, we can create a semantic, machine-readable web that understands context. To access and browse it, we need query and indexing tools across different ledgers (The Graph) as well as a generalizable and customizable interface (Anytype).

Atop such a neo-OS, we could build new search and recommendation engines underpinned by the open economy’s principles of openness, collaboration, and participation. Open marketplaces for customizable, open source search algorithms, interfaces, or recommendation engines could thrive, putting an end to manipulative, black-box data monopolies.

This article was originally published on SVRGN’s blog. You can read it here.

How ‘hearables’ could soon help us out at work and school while remaining virtually invisible

Hearables are wireless smart micro-computers with artificial intelligence that incorporate both speakers and microphones. They fit in the ears and can connect to the internet and to other devices, and are designed to be worn daily. Some technology companies are now marketing these as “the future of hearing enhancement,” and focusing on their capacities to disrupt existing hearing aid markets.

But hearables aren’t hearing aids, ear plugs, headphones or headsets, although they could acquire the benefits of these devices. This means that one could rely on hearables as a kind of always-worn personal assistant nested in the ear, whether used for whispering scheduling reminders, playing music, amplifying sound or talking with friends.

But with AI, the hearable can also be used to determine the physiological condition of the user, along with their present knowledge or skill level in any content they’re accessing: for instance, when learning a new language.

As an expert in educational technology, I believe hearables have potential for education. Educational technology entrepreneurs are now discussing how this new reliance on aural technologies can result in greater incorporation of voice into learning, a shift from relying primarily on texts for transferring knowledge.

What hearables aren’t

To accompany traditional forms of classroom education or online education, hearables can support the delivery of lectures, educational podcasts, notifications and reminders through a wide variety of applications while supporting interactivity.

With hearables, instant replay and recording of words are also possible, so students could check their understanding of a lesson. Intelligent hearables could even determine the context and choose the right time and place to deliver the best content.

Another important feature for education is the ability to translate between languages.

In music education and language teaching, hearables are poised to play a significant role as listening is at the centre of both music and language comprehension. Music and language students could access relevant content from anywhere and practise their lessons. In addition, the biometric capabilities of the devices allow for measuring health and fitness variables, and so can be useful in health education.

Learners could use hearables almost anywhere with internet connectivity to communicate with teachers and other students. For example, while commuting, students could collaborate on projects and access content with text-to-speech technologies and talk directly with their teacher for advice.

Beyond using hearables in formal and informal educational and learning settings, these devices can also be well-integrated into normal life and activities, and used for more than learning — for instance, using voice commands to control home devices.

Smart technology can be applied in classroom settings to enrich the learning experience. Image via Wikimedia Commons

Professional training, independent learning

Hearables could be used by workers in manufacturing facilities or other professional settings. They could empower users to search for and access instructions while they are hands-on with their tools, without the distraction of a screen.

Outside formal education or the workplace, hearables can also help learners take control of their own learning. A rise in popularity of educational lectures as podcasts may have helped open the door to relying on audio in new ways and using different kinds of audio devices.

Language learning for native speakers could also be improved with hearables, as these devices could be used to improve skills in public speaking, presenting, interviewing or working in teams. With AI, hearables are also well-placed to support adaptive personal learning tailored to individual learners’ personal characteristics and situations. Hearables could become one of the principal ways learners of any age interact in learning.

Challenges, limitations

There are, however, significant challenges in using hearable devices. The most important to date are technical limitations. The need to reduce power usage and battery size, increase battery life, and provide more reliable connectivity remains a significant obstacle to be addressed by manufacturers.

Battery longevity and high bandwidth connections are essential to support natural language communications, particularly in language translation. In the immediate future, this may only be available in larger cities where 5G capability is now being set up. This fifth generation of wireless technology allows for a major reduction in energy consumption, which is needed to support extended battery life and high-speed internet. Fortunately, it is now incorporated in the latest operating systems and could soon become ubiquitous.

There are also major concerns related to social acceptability, in addition to privacy, when it comes to people talking out loud in a public space or office.

The comfort of these new lightweight devices could help to destigmatize their use, related to perceptions of hearing aids. Some hearable companies are focusing on stylishness while others promote them as “wearable tech for your ears.”

Perhaps hearables could be the earrings of the future?

Rise of ubiquitous devices?

Smart mobile devices are now ubiquitous among students. This cannot be said for hearable devices yet, and it could take some time before they achieve a similar level of ubiquity, if ever.

On the other hand, lessons designed for hearable device use can be easily accessed by students on their mobile devices or other computers.

Hearables will soon be here to stay both in wider society and the educational community. As trend forecaster and marketer Piers Fawkes has commented: “Maybe instead of people staring at their screens, they are going to be staring off into the distance. What’s it called? The thousand-yard stare.”

This article by Rory McGreal, Professor and UNESCO/ICDE Chair in Open Educational Resources, Athabasca University, is republished from The Conversation under a Creative Commons license. Read the original article.

Did you know we have a newsletter all about consumer tech? It’s called Plugged In – and you can subscribe to it right here.

Virgin Hyperloop wants to get you excited about riding its ultra-fast pods — but there’s a long way to go

Hyperloop is dubbed the first new mode of transport in over 100 years, and if all goes to plan, companies like Virgin Hyperloop will propel people through the air in pods moving through vacuum tubes.

This week Virgin Hyperloop released a new explainer video on Twitter. While the company sounds excited to get you onboard its ultra-fast trains, you’re in for a long wait before you can strap in for your first ride. Here’s why.

According to Virgin Hyperloop, it starts with a “near-vacuum environment inside a tube” that facilitates high speeds and low power consumption by reducing aerodynamic drag.

People and goods will travel in pods that use “proprietary magnetic levitation and propulsion” to lift and guide pods on a track, and allow several pods to depart per minute across different routes.

The company asserts:
After building and testing the world’s first hyperloop system, we are now focused on our commercial product. The key to our product is guided by a design that is elegant through its simplicity, future-proof due to its modularity, and guided by principles of this century, not the last.

Future pods will seat up to 28 passengers at speeds of over 1,000 km/h — 10 times faster than traditional rail. By comparison, a maglev bullet train launched in China in July only reached 600 km/h. In terms of routes, the company’s focus is Dubai; India (between Mumbai and Pune); and, in the US, North Carolina and Texas. It claims to have carried out over 500 tests in Las Vegas.

Hyperloop pods will depart at quick intervals across different routes.

First Virgin Hyperloop passenger test in 2020

Virgin Hyperloop is the only company to have tested a hyperloop with actual passengers. In November 2020, it successfully launched a two-seat prototype ridden by two members of staff. The pod traveled 500 meters, reaching 172 kilometers per hour within 6.25 seconds. OK, the technology works, albeit on a short track and slower than promised. But months have passed since then, and we’ve not seen any longer or faster journeys by Virgin Hyperloop or any of its competitors. What is the real progress?
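Those test figures at least hang together. A quick back-of-envelope check (our arithmetic, not Virgin Hyperloop’s) shows the pod pulled roughly 0.8 g and used about 150 of the 500 meters just getting up to speed:

```python
top_speed = 172 / 3.6       # 172 km/h in m/s (~47.8)
accel = top_speed / 6.25    # ~7.6 m/s^2, about 0.78 g
accel_dist = 0.5 * top_speed * 6.25  # ~149 m, assuming constant acceleration

print(f"{accel:.1f} m/s^2 ({accel / 9.81:.2f} g), "
      f"{accel_dist:.0f} m to top speed, {500 - accel_dist:.0f} m to brake")
```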

Hyperloop eligible for US federal funding

This month the US Senate passed the $1.2 trillion bipartisan Infrastructure Investment and Jobs Act. The legislation allows hyperloop companies to compete for federal funding for US-based projects.

This includes hyperloop eligibility for the Consolidated Rail Infrastructure and Safety Improvements program and the Advanced Technology Vehicle Manufacturing (ATVM) loan program at the Department of Energy. As a result, hyperloop companies will be able to get funding for safety testing and loans for vehicle manufacturing.

So, where’s my pod ride?

As a technology, the hyperloop is still firmly in the R&D phase, and most of the efforts by Virgin’s competitors remain in stealth mode.

The video is, in reality, just a CGI rendering of what could be rather than an update of what exists. Is it a sign of hyperloop tech struggling to stay relevant as attention shifts to autonomous vehicles and flying taxis?

There’s still a considerable way to go in terms of questions of economic viability, legal regulation, scale, interoperability, and whether people are actually willing to travel in almost silent, windowless pods. We’ve been given the plans for an impressive disruption to long-distance travel. But we’re waiting to see proof of long-term viability compared to maglev trains and other high-speed rail.

Virgin Hyperloop has set 2027 for the release of the first commercial offering. Whether the world will be ready is the question.

Do EVs excite your electrons? Do ebikes get your wheels spinning? Do self-driving cars get you all charged up?

Then you need the weekly SHIFT newsletter in your life. Click here to sign up.

How biometric data collection can put people in conflict areas at risk

In 2007, the United States military began using a small, handheld device to collect and match the iris, fingerprint and facial scans of over 1.5 million Afghans against a database of biometric data. The device, known as Handheld Interagency Identity Detection Equipment (HIIDE), was initially developed by the U.S. government as a means to locate insurgents and other wanted individuals. Over time, for the sake of efficiency, the system came to include the data of Afghans assisting the U.S. during the war.

Today, HIIDE provides access to a database of biometric and biographic data, including of those who aided coalition forces. Military equipment and devices — including the collected data — are speculated to have been captured by the Taliban, who have taken over Afghanistan.

This development is the latest in many incidents that exemplify why governments and international organizations cannot yet securely collect and use biometric data in conflict zones and in their crisis responses.

Building biometric databases

Biometric data, or simply biometrics, are unique physical or behavioral characteristics that can be used to identify a person. These include facial features, voice patterns, fingerprints or iris features. Often described as the most secure method of verifying an individual’s identity, biometric data are being used by governments and organizations to verify and grant citizens and clients access to personal information, finances and accounts.

According to a 2007 presentation by the U.S. Army’s Biometrics Task Force, HIIDE collected and matched fingerprints, iris images, facial photos and biographical contextual data of persons of interest against an internal database.

In a May 2021 report, anthropologist Nina Toft Djanegara illustrates how the collection and use of biometrics by the U.S. military in Iraq set the precedent for similar efforts in Afghanistan. There, the “U.S. Army Commander’s Guide to Biometrics in Afghanistan” advised officials to “be creative and persistent in their efforts to enroll as many Afghans as possible.” The guide recognized that people may hesitate to provide their personal information and therefore, officials should “frame biometric enrolment as a matter of ‘protecting their people.’”

Inspired by the U.S. biometrics system, the Afghan government began work to establish a national ID card, collecting biometric data from university students, soldiers, and passport and driver license applicants.

Although it remains uncertain at this time whether the Taliban has captured HIIDE and if it can access the aforementioned biometric information of individuals, the risk to those whose data is stored on the system is high. In 2016 and 2017, the Taliban stopped passenger buses across the country to conduct biometric checks of all passengers to determine whether there were government officials on the bus. These stops sometimes resulted in hostage situations and executions carried out by the Taliban.

Placing people at increased risk

We are familiar with biometric technology through mobile features like Apple’s Touch ID or Samsung’s fingerprint scanner, or by engaging with facial recognition systems while passing through international borders. For many people located in conflict zones or relying on humanitarian aid in the Middle East, Asia and Africa, biometrics are presented as a secure measure for accessing resources and services to fulfil their most basic needs.

In 2002, the United Nations High Commissioner for Refugees (UNHCR) introduced iris-recognition technology during the repatriation of more than 1.5 million Afghan refugees from Pakistan. The technology was used to identify individuals who sought funds “more than once.” If the algorithm matched a new entry to a pre-existing iris record, the claimant was refused aid.

The UNHCR was so confident in the use of biometrics that it altogether decided not to allow disputes from refugees. From March to October 2002, 396,000 alleged false claimants were turned away from receiving aid. However, as communications scholar Mirca Madianou argues, iris recognition has an error rate of two to three per cent, suggesting that roughly 11,800 of those claimants (about three per cent of 396,000) were wrongly denied aid.

Additionally, since 2018, the UNHCR has collected biometric data from Rohingya refugees. However, reports recently emerged that the UNHCR shared this data with the government of Bangladesh, which subsequently shared it with the Myanmar government to identify individuals for possible repatriation (all without the Rohingya’s consent). The Rohingya, like the Afghan refugees, were instructed to register their biometrics to receive and access aid in conflict areas.

The UNHCR collects the biometric data of refugees in Uganda.

In 2007, as the U.S. government was introducing HIIDE in Afghanistan, the U.S. Marine Corps was walling off Fallujah in Iraq to supposedly deny insurgents freedom of movement. To get into Fallujah, individuals required a badge, obtained by exchanging their biometric data. After the U.S. retreated from Iraq in 2020, the database remained in place, including all the biometric data of those who worked on bases.

Protecting privacy over time

Registering in a biometric database means trusting not just the current organization requesting the data but any future organization that may come into power or have access to the data. Additionally, the collection and use of biometric data in conflict zones and crisis response present heightened risks for already vulnerable groups.

While collecting biometric data is useful in specific contexts, this must be done carefully. Ensuring the security and privacy of those who could be most at risk and those who are likely to be compromised or made vulnerable is critical. If security and privacy cannot be ensured, then biometric data collection and use should not be deployed in conflict zones and crisis response.

This article by Lucia Nalbandian, Researcher, Canada Excellence Research Chair in Migration and Integration, Ryerson University, is republished from The Conversation under a Creative Commons license. Read the original article.

Panos Panay’s promotion is great news for Microsoft Surface fans

Panos Panay is moving on up at Microsoft. The man most associated with the Surface device line was just promoted to executive vice president and added to the company’s senior leadership team, according to Bloomberg. That means he’ll be one of the primary advisers to CEO Satya Nadella.

Panay, who has been with Microsoft since 2004, is generally seen as the ‘father’ of the Surface product family. His passionate product presentations have made him the most well-known Microsoft employee besides Nadella and Xbox’s Phil Spencer. But until now, he hadn’t held that much official influence over the company as a whole — the fact that he wasn’t on the senior leadership team may even come as a surprise to the casual tech enthusiast.

As Microsoft has tightened the connection between its Surface and Windows divisions — joining them under one umbrella in 2020 — Panay has taken on an increasingly important role in the company’s plans. By now, he has become the face of Windows as well as Surface: he was the one who announced Windows 11, and he keeps teasing us with new features.

Panos Panay holding two Surface devices

The fact that Panay will now be reporting directly to Nadella is a big deal — Microsoft hasn’t had someone from Windows on the senior leadership team since 2018, let alone Surface. At the time, the company was all about the cloud. But as the Surface line grows into an increasingly important business — and Microsoft reconsiders Windows’ future with Windows 11 — the company has had a renewed focus on its consumer-facing products.

As a Surface fan, I’m hoping that Panos’ greater influence is a sign the company has big things planned for its hardware. Unlike Apple, whose hardware has long been central to its business model, Microsoft has often seemed to treat Surface as an experimental branch compared to its various software products.

The company has made some of my favorite PCs throughout the years, but it’s also often felt like it’s been holding back, with few hardware refreshes and specs that were sometimes well behind the competition. A few years ago, there were even rumors that Microsoft might give up on Surface altogether, joining Zune and Lumia in the company’s hardware graveyard.

But things are changing, and its Surface business is actually making decent money these days. Considering many of the company’s PCs — including the Surface Pro, Laptop, and Book — are due for a design refresh, that’s pretty impressive. Perhaps the lack of fresh designs is because better things are yet to come; I imagine the company is cooking up some major new hardware to coincide with the launch of Windows 11 later this year.

Panay brings a passionate energy to his product announcements — he’s the closest equivalent Microsoft has had to Steve Jobs. If he brings that same energy to Microsoft’s leadership, there could be some good years ahead for Surface fans.

Did you know we have a newsletter all about consumer tech? It’s called Plugged In – and you can subscribe to it right here.

Robinhood’s gamified investing makes massive losses feel unreal

This article was originally published on The Conversation in March 2021, ahead of Robinhood’s initial public offering. The company went public in July 2021. The original article, which mentions pre-IPO Robinhood, is reproduced below:

Wall Street has long been likened to a casino. Robinhood, an investment app that just filed plans for an initial public offering, makes the comparison more apt than ever.

That’s because the power of the casino is the way it makes people feel like gambling their money away is a game. Casinos are full of mood lighting, fun noises and other sensory details that reward gamblers when they place coins in slots.

Similarly, Robinhood’s slick and easy-to-use app resembles a thrill-inducing video game rather than a sober investment tool. The color palette of red and green is associated with mood, with green having a calming effect and red increasing arousal, anger and negative emotions. Picking stocks can seem like a fun lottery, like scratching off a winning ticket; celebratory confetti drops from the top of the screen for a new user’s first three investments.

But just as people can lose a lot of money gambling at the casino, the same thing can happen when you trade stocks and bonds – sometimes with disastrous consequences, such as last year when a Robinhood user died by suicide after mistakenly believing that he’d lost US$750,000.

I study how people behave inside game worlds and design classroom games. Using gamelike features to influence real-life actions can be beneficial, such as when a health app uses rewards and rankings to encourage people to move more or eat healthier food. But there’s a dark side too, and so-called gamification can lead people to forget the real-world consequences of their decisions.

Games explained

Generally speaking, games – whether played on a board, among children or with a computer – are voluntary activities that are structured by rules and involve players competing to overcome challenges that carry no risk outside of their virtual world.

The reason games are so captivating is that they challenge the mind to learn new things and are generally safe spaces to face and overcome failure.

Games also mimic rites of passage similar to religious rituals and draw players into highly focused “flow states” that dramatically alter self-awareness. This sensory blend of flow and mastery is what makes games fun and sometimes addictive: “Just one more turn” thinking can last for hours, and players forget to eat and sleep. Players who barely remember yesterday’s breakfast recall visceral details from games played decades ago.

Unlike static board games, video games specifically provide visual and auditory feedback, rewarding players with color, movement and sound to maintain engagement.

The power of angrier birds

The psychological impact of game play can also be harnessed for profit.

For example, many free-to-play video games such as Angry Birds 2 and Fortnite give players the option to spend real money on in-game items such as new and even angrier birds or character skins. While most people avoid spending much money, the model results in a small share of heavy users spending thousands of dollars in an otherwise free game.

This “free-to-play” model is so profitable that it’s grown increasingly popular with video game designers and publishers.

Similarly, subscription-based “massively multiplayer online roleplaying games” such as Final Fantasy XIV use core game play loops. These are the primary set of actions a player will carry out during a game – such as jumping in Super Mario Brothers or continuously upgrading weapons in the Borderlands series – that encourage constant play and keep users playing and paying. They’re so effective that, for a small number of people, playing the game can even become an addiction that interferes with their mental well-being.

Gamification, however, goes one step further and uses gaming elements to influence real-world behavior.

 

Gamification is the use of gamelike elements in other contexts. Common elements include badges, points, rankings and progress bars that visually encourage players to achieve goals.

Many readers likely have experienced this type of gamification to improve personal fitness, get better grades, build savings accounts and even solve major scientific problems. Some initiatives also include offering rewards that can be cashed in for participating in actual civic projects, such as volunteering in a park, commenting on a piece of legislation or visiting a government website.

They all rely on the behavioral concept known as extrinsic motivation, which occurs when a person pursues goals with the expectation of a reward, such as a student who hates calculus but desperately needs an A to graduate. Extrinsic motivation lasts only as long as the player feels appropriately challenged and rewarded. Games exploit this by tapping into the pleasure of earning rewards.

Gamification for bad

There’s a fine line, though, between using extrinsic motivation to help people lose some weight and using it to obscure the complexity of investing in stocks and other financial instruments behind a fun, game-like environment.

Robinhood built its app to delight people who are new to active investing, taking advantage of the same psychological motivators that drive game behavior. Robinhood’s simple interface is replete with emojis, push notifications, digital confetti and backslapping affirmation emails. Its “game play loop” is making stock trading easy while providing sensory feedback.

I opened an account to see for myself.

The gamelike thrills start at sign-up when Robinhood offers new users a free stock, which they select from three face-down golden cards. This gives a casino-like illusion of choice, with the color gold lending an air of sophistication.

But rather than merely pick a card, users actually “scratch” it, like a lottery ticket, after which the stock is revealed with affirming congratulations and a screen full of confetti. Other sensory appeals such as colors and gamified imagery such as gift boxes encourage continued use.

Digitalized gift box is opened revealing cash, next to the words, 'Invite more friends. Get free stock.'
The gift imagery in the Robinhood app taps into the extrinsic promise of a reward for both the sender and receiver. Robinhood app

By delighting users, Robinhood creates players rather than investors. This leads them to overlook the fact that speculative investing is very difficult and could cause them to lose lots of money – it’s hard even for professionals who spend hours and days scrutinizing companies and trades.

Robinhood isn’t the only financial app that uses some of these gamelike effects. But unlike Robinhood, apps like Acorns and the Long Game encourage users to save money rather than spend it.

Games make learning fun

In my own work studying player interaction and decision-making in games, I’ve largely found them to be positive psychological tools.

And there are lots of real-world applications of game play, such as for improving health, furthering education and saving money. But I believe simply encouraging people with little investing experience to buy and sell stocks is not one of them.

As Robinhood prepares to go public, it could use the opportunity to rethink how it interacts with users. Rather than celebrating a trade, for example, it could reward users for completing an investment education program.

As any good game maker knows, the best games not only are big on fun and socializing but emphasize learning too.

Article by James “Pigeon” Fielder, Adjunct Professor of Political Science, Colorado State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The dos and don’ts of machine learning research — read it, nerds

Did you know Neural is taking the stage this fall? Together with an amazing line-up of experts, we will explore the future of AI during TNW Conference 2021. Secure your ticket now!

Machine learning is becoming an important tool in many industries and fields of science. But ML research and product development present several challenges that, if not addressed, can steer your project in the wrong direction.

In a paper recently published on the arXiv preprint server, Michael Lones, Associate Professor in the School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, provides a list of dos and don’ts for machine learning research.

The paper, which Lones describes as “lessons that were learnt whilst doing ML research in academia, and whilst supervising students doing ML research,” covers the challenges of different stages of the machine learning research lifecycle. Although aimed at academic researchers, the paper’s guidelines are also useful for developers who are creating machine learning models for real-world applications.

Here are my takeaways from the paper, though I recommend that anyone involved in machine learning research and development read it in full.

Pay extra attention to data

Machine learning models live and thrive on data. Accordingly, throughout the paper, Lones reiterates the importance of paying extra attention to data at all stages of the machine learning lifecycle. You must be careful about how you gather and prepare your data and how you use it to train and test your machine learning models.

No amount of computation power and advanced technology can help you if your data doesn’t come from a reliable source and hasn’t been gathered in a reliable manner. You should also do your own due diligence to check the provenance and quality of your data. “Do not assume that, because a data set has been used by a number of papers, it is of good quality,” Lones writes.

Your dataset might have various problems that can lead to your model learning the wrong thing.

For example, if you’re working on a classification problem and your dataset contains too many examples of one class and too few of another, the trained machine learning model might end up learning to predict every input as belonging to the overrepresented class. In this case, your dataset suffers from “class imbalance.”
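
Spotting this is often a one-liner. Here is a minimal sketch, assuming your data sits in a pandas DataFrame with a hypothetical “label” column; the 10 percent warning threshold is an arbitrary illustration, not a rule from the paper:

```python
import pandas as pd

# Hypothetical dataset; the file name and "label" column are placeholders.
df = pd.read_csv("training_data.csv")

# Share of each class in the dataset
counts = df["label"].value_counts(normalize=True)
print(counts)

# Crude red flag: any class under 10% of the data deserves a closer look
# (resampling, class weights, or imbalance-aware metrics).
if counts.min() < 0.10:
    print("Possible class imbalance detected")
```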

While class imbalance can be spotted quickly with data exploration practices, finding other problems needs extra care and experience. For example, if all the pictures in your dataset were taken in daylight, then your machine learning model will perform poorly on dark photos. A more subtle example is the equipment used to capture the data. For instance, if you’ve taken all your training photos with the same camera, your model might end up learning to detect the unique visual footprint of your camera and will perform poorly on images taken with other equipment. Machine learning datasets can have all kinds of such biases.

The quantity of data is also an important issue: make sure you have enough of it. “If the signal is strong, then you can get away with less data; if it’s weak, then you need more data,” Lones writes.

In some fields, the lack of data can be compensated for with techniques such as cross-validation and data augmentation. But in general, you should know that the more complex your machine learning model, the more training data you’ll need. For example, a few hundred training examples might be enough to train a simple regression model with a few parameters. But if you want to develop a deep neural network with millions of parameters, you’ll need much more training data.

Another important point Lones makes in the paper is the need to have a strong separation between training and test data. Machine learning engineers usually put aside part of their data to test the trained model. But sometimes, the test data leaks into the training process, which can lead to machine learning models that don’t generalize to data gathered from the real world.

“Don’t allow test data to leak into the training process,” he warns. “The best thing you can do to prevent these issues is to partition off a subset of your data right at the start of your project, and only use this independent test set once to measure the generality of a single model at the end of the project.”
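
In scikit-learn terms, that advice boils down to something like the following sketch; the synthetic data is a stand-in for your own features and labels:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; replace with your own features and labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Partition off the test set once, right at the start of the project.
# stratify=y keeps the class proportions identical in both partitions.
X_work, X_test, y_work, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# All exploration, training, and tuning happens on X_work / y_work;
# X_test / y_test are touched exactly once, for the final evaluation.
```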

In more complicated scenarios, you’ll need a “validation set,” a second held-out set used during model selection and tuning, so that the test set can still provide an unbiased final evaluation. For example, if you’re doing cross-validation or ensemble learning, the original test set might not provide a precise evaluation of your models. In this case, a validation set can be useful.

“If you have enough data, it’s better to keep some aside and only use it once to provide an unbiased estimate of the final selected model instance,” Lones writes.
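
Here is one way that workflow might look, sketched with illustrative models and split sizes (none of which come from the paper): candidate models are compared by cross-validation on the working data only, and the held-out test set is consulted a single time at the end.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_work, X_test, y_work, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Model selection happens only on the working data, via cross-validation.
candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X_work, y_work, cv=5)
    print(f"{name}: {scores.mean():.3f}")

# Only the single selected model sees the test set, exactly once.
best = LogisticRegression(max_iter=1000).fit(X_work, y_work)
print(f"final unbiased estimate: {best.score(X_test, y_test):.3f}")
```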

Know your models (as well as those of others)

Today, deep learning is all the rage. But not every problem needs deep learning. In fact, not every problem even needs machine learning. Sometimes, simple pattern-matching and rules will perform on par with the most complex machine learning models at a fraction of the data and computation costs.

But when it comes to problems that are specific to machine learning models, you should always have a roster of candidate algorithms to evaluate. “Generally speaking, there’s no such thing as a single best ML model,” Lones writes. “In fact, there’s a proof of this, in the form of the No Free Lunch theorem, which shows that no ML approach is any better than any other when considered over every possible problem.”

The first thing you should check is whether your model matches your problem type. For example, based on whether your intended output is categorical or continuous, you’ll need to choose the right machine learning algorithm along with the right structure. Data types (e.g., tabular data, images, unstructured text, etc.) can also be a defining factor in the class of model you use.
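
As a toy illustration of that first check, assuming scikit-learn and a tabular dataset, the choice between a classifier and a regressor follows directly from the target type:

```python
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def pick_model(target_is_categorical: bool):
    """Toy dispatch: a categorical target (e.g. spam / not spam) calls for
    classification; a continuous target (e.g. house price) for regression."""
    if target_is_categorical:
        return RandomForestClassifier(random_state=0)
    return RandomForestRegressor(random_state=0)

print(pick_model(True))   # RandomForestClassifier(...)
print(pick_model(False))  # RandomForestRegressor(...)
```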

One important point Lones makes in his paper is the need to avoid excessive complexity. For example, if your problem can be solved with a simple decision tree or regression model, there’s no point in using deep learning.
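
One way to keep complexity honest is to score a trivial baseline before anything fancy; if a shallow tree already does well, a deep network is hard to justify. A sketch with synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Trivial baseline: always predict the most frequent class.
baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
# Simple, interpretable model.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)

print(f"baseline:      {baseline.score(X_te, y_te):.3f}")
print(f"decision tree: {tree.score(X_te, y_te):.3f}")
# Only reach for heavier models if they clearly beat the simple ones.
```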

Lones also warns against trying to reinvent the wheel. With machine learning being one of the hottest areas of research, there’s always a solid chance that someone else has solved a problem that is similar to yours. In such cases, the wise thing to do would be to examine their work. This can save you a lot of time because other researchers have already faced and solved challenges that you will likely meet down the road.

“To ignore previous studies is to potentially miss out on valuable information,” Lones writes.

Examining papers and work by other researchers might also provide you with machine learning models that you can use and repurpose for your own problem. In fact, machine learning researchers often use each other’s models to save time and computational resources and start with a baseline trusted by the ML community.

“It’s important to avoid ‘not invented here syndrome’, i.e., only using models that have been invented at your own institution, since this may cause you to omit the best model for a particular problem,” Lones warns.

Know the final goal and its requirements

Having a solid idea of what your machine learning model will be used for can greatly impact its development. If you’re doing machine learning purely for academic purposes and to push the boundaries of science, then there might be no limits to the type of data or machine learning algorithms you can use. But not all academic work will remain confined to research labs.

“[For] many academic studies, the eventual goal is to produce an ML model that can be deployed in a real world situation. If this is the case, then it’s worth thinking early on about how it is going to be deployed,” Lones writes.

For example, if your model will be used in an application that runs on user devices and not on large server clusters, then you can’t use large neural networks that require large amounts of memory and storage space. You must design machine learning models that can work in resource-constrained environments.

Another problem you might face is the need for explainability. In some domains, such as finance and healthcare, application developers are legally required to provide explanations of algorithmic decisions in case a user demands it. In such cases, using a black-box model might be impossible. For example, even though a deep neural network might give you a performance advantage, its lack of interpretability might make it useless. Instead, a more transparent model such as a decision tree might be a better choice even if it results in a performance hit. Alternatively, if deep learning is an absolute requirement for your application, then you’ll need to investigate techniques that can provide reliable interpretations of activations in the neural network.

As a machine learning engineer, you might not have precise knowledge of the requirements of your model. Therefore, it is important to talk to domain experts because they can help to steer you in the right direction and determine whether you’re solving a relevant problem or not.

“Failing to consider the opinion of domain experts can lead to projects which don’t solve useful problems, or which solve useful problems in inappropriate ways,” Lones writes.

For example, if you create a neural network that flags fraudulent banking transactions with very high accuracy but provides no explanation of its decision, then financial institutions won’t be able to use it.

Know what to measure and report

There are various ways to measure the performance of machine learning models, but not all of them are relevant to the problem you’re solving.

For example, many ML engineers use the “accuracy test” to rate their models. The accuracy test measures the percent of correct predictions the model makes. This number can be misleading in some cases.

For example, consider a dataset of x-ray scans used to train a machine learning model for cancer detection. Your data is imbalanced, with 90 percent of the training examples flagged as benign and the rest classified as malignant. If your trained model scores 90 percent on the accuracy test, it might have just learned to label everything as benign. If used in a real-world application, this model could lead to missed cases with disastrous outcomes. In such a case, the ML team must use tests that are insensitive to class imbalance or use a confusion matrix to check other metrics. More recent techniques can provide a detailed measure of a model’s performance in various areas.

Based on the application, ML developers might also want to measure several metrics. To return to the cancer detection example, in such a model it might be important to reduce false negatives as much as possible, even if it comes at the cost of lower accuracy or a slight increase in false positives. It is better to send a few healthy people to the hospital for further diagnosis than to miss critical cancer patients.
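
A small sketch of why accuracy deceives here, using hypothetical labels that mirror the 90/10 x-ray example (1 = malignant, 0 = benign):

```python
from sklearn.metrics import accuracy_score, confusion_matrix, recall_score

# Hypothetical ground truth: 90 benign scans, 10 malignant ones.
y_true = [0] * 90 + [1] * 10
# A degenerate model that learned to call everything benign.
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))   # 0.9 -- looks respectable
print(recall_score(y_true, y_pred))     # 0.0 -- every cancer case missed
print(confusion_matrix(y_true, y_pred)) # [[90  0]
                                        #  [10  0]]
```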

In his paper, Lones warns that when comparing several machine learning models for a problem, bigger numbers do not necessarily mean better models. For example, performance differences might be due to your model being trained and tested on different partitions of your dataset or on entirely different datasets.

“To really be sure of a fair comparison between two approaches, you should freshly implement all the models you’re comparing, optimise each one to the same degree, carry out multiple evaluations… and then use statistical tests… to determine whether the differences in performance are significant,” Lones writes.
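
One hedged way to act on that advice is to collect per-fold cross-validation scores for two models and run a paired t-test over them, a common if imperfect heuristic, since CV folds are not fully independent. A sketch:

```python
from scipy.stats import ttest_rel
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Same folds for both models (same cv argument, same data order),
# so the per-fold scores can be compared pairwise.
scores_a = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)
scores_b = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10)

t_stat, p_value = ttest_rel(scores_a, scores_b)
print(f"p = {p_value:.3f}")  # a small p-value suggests a real difference
```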

Lones also warns not to overestimate the capabilities of your models in your reports. “A common mistake is to make general statements that are not supported by the data used to train and evaluate models,” he writes.

Therefore, any report of your model’s performance must also include the kind of data it was trained and tested on. Validating your model on multiple datasets can provide a more realistic picture of its capabilities, but you should still be wary of the kind of data errors we discussed earlier.

Transparency can also contribute greatly to other ML research. If you fully describe the architecture of your models as well as the training and validation process, other researchers that read your findings can use them in future work or even help point out potential flaws in your methodology.

Finally, aim for reproducibility. If you publish your source code and model implementations, you can provide the machine learning community with great tools for future work.

Applied machine learning

Interestingly, almost everything Lones wrote in his paper is also applicable to applied machine learning, the branch of ML that is concerned with integrating models into real products. However, I would like to add a few points that go beyond academic research and are important in real-world applications.

When it comes to data, machine learning engineers must take an extra set of considerations into account before integrating it into products. These include data privacy and security, user consent, and regulatory constraints. Many a company has fallen into trouble for mining user data without users’ consent.

Another important matter that ML engineers often forget in applied settings is model decay. Unlike academic research, machine learning models used in real-world applications must be retrained and updated regularly. As everyday data changes, machine learning models “decay” and their performance deteriorates. For example, as life habits changed in the wake of the COVID-19 lockdowns, ML systems that had been trained on old data started to fail and needed retraining. Likewise, language models need to be constantly updated as new trends appear and our speaking and writing habits change. These changes require the ML product team to devise a strategy for the continued collection of fresh data and the periodic retraining of their models.
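
What that strategy looks like in code varies wildly, but at its simplest it can be a monitoring check along the lines of the sketch below; the thresholds, window, and function name are hypothetical illustrations, not an established recipe:

```python
# Accuracy measured on the held-out test set when the model shipped.
DEPLOYMENT_ACCURACY = 0.92
# Tolerated degradation before triggering retraining (arbitrary choice).
MAX_ACCURACY_DROP = 0.05

def needs_retraining(recent_predictions, recent_labels):
    """Compare live accuracy on a rolling window of freshly labeled
    production data against the accuracy at deployment time."""
    correct = sum(p == y for p, y in zip(recent_predictions, recent_labels))
    live_accuracy = correct / len(recent_labels)
    return (DEPLOYMENT_ACCURACY - live_accuracy) > MAX_ACCURACY_DROP

# e.g. fed by the last few thousand predictions and their true outcomes
print(needs_retraining([1, 0, 1, 1], [1, 1, 0, 0]))  # True: accuracy 0.25
```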

Finally, integration challenges will be an important part of every applied machine learning project. How will your machine learning system interact with other applications currently running in your organization? Is your data infrastructure ready to be plugged into the machine learning pipeline? Does your cloud or server infrastructure support the deployment and scaling of your model? These kinds of questions can make or break the deployment of an ML product.

For example, AI research lab OpenAI recently launched a test version of its Codex API model for public appraisal. But the launch faltered because its servers couldn’t scale to user demand.

Hopefully, this brief post will help you better assess your machine learning project and avoid mistakes. Read Lones’s full paper, titled, “How to avoid machine learning pitfalls: a guide for academic researchers,” for more details about common mistakes in the ML research and development process.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.

Facebook’s ‘Project Aria’ wearable looks like lame old Snap-style glasses

It’s been almost a year since Facebook first unveiled its ambitious AR vision called Project Aria. While it was mostly about conceptual ideas and providing a “sensor platform” to developers, we never got to hear more about it — until now.

A new product manual, first spotted by Protocol, describes what this prototype device might look like and what it’s meant to do. The hardware is codenamed Gemini EVT (engineering validation test), and it doesn’t have a built-in display. So don’t expect Google Glass-like AR tricks from it.

The device has four cameras and three buttons for power, capture, and mute. It also has a series of LEDs to let passersby know that the device is active and recording. The mute switch also toggles a privacy mode, which supposedly stops recording sound. The manual doesn’t have any details on the cameras’ specifications.

Physical buttons on Facebook’s prototype project Aria hardware
Credit: Fcc.io

The Gemini EVT charges through a USB cable with a proprietary port on the other end.

The manual also notes that you can set up the device through an iOS app called Ariane. This will also tell you about the device’s charging status and the data it’s collected.

It’s important to keep in mind that this is a prototype device, and it might not ever be available for purchase — even if Facebook is testing a bunch of them. But never say never: we know that the social networking giant is making smart glasses in partnership with Ray-Ban, sans AR. Those could eventually arrive in the market at some point. Keep an eye out.

Did you know we have a newsletter all about consumer tech? It’s called Plugged In – and you can subscribe to it right here.

Instagram will finally show content when you search for stuff — like it should

If you search for a keyword on any social media or content serving website, you expect to get suggestions for related posts, images, or videos. However, on Instagram, all you get are accounts, hashtags, and places as search results. Annoying.

Thankfully, the company is working on changing this approach. Last night, the firm’s head, Adam Mosseri, said that it’s working on a “full search result page” that will show photos and videos as results, along with tabs for accounts and hashtags. Here’s what the new search will look like:

Instagram is working on a new version of search that will show content results

The inner workings of this feature sound obvious in this day and age, but it might be a bit tricky. It’s easy for an algorithm to find posts for the keyword “space” if the posts have the search query in their captions and hashtags. But the algorithm will also need to find posts that capture space yet don’t have the matching metadata.

Instagram said that for the initial phase, it’ll concentrate on getting keyword search right for the English language, and then expand it to other languages.

As Mosseri explained in his video on Instagram, the company currently looks at aspects like the text of the search phrase, your activity and likes on the platform, and the popularity of posts to provide you with good search results. However, for this new version of search, the company might need to look at other factors, such as when a post was uploaded.

Instagram’s chief rival, TikTok, already presents search results in a similar format. And given how the ByteDance-owned app’s algorithms are regarded for their ability to surface relevant, compelling content in search results, Mosseri & Co. have a lot of catching up to do.

You can read more about Instagram’s announcement here.

Did you know we have a newsletter all about consumer tech? It’s called Plugged In – and you can subscribe to it right here.

The Boox Note Air is the Android-powered Kindle alternative I didn’t know I needed

I read several thousand pages of books every year, but I’m not the type of person who feels attached to physical books. I’ve been fully immersed in Amazon’s Kindle ecosystem for years now, sometimes reading on my phone, sometimes on my laptop, and sometimes listening through Audible. Getting an e-reader that wasn’t a Kindle never really occurred to me.

And then I tried the Boox Note Air. It does almost everything a Kindle Oasis does, but it also supports a surprisingly useful stylus and has a giant 10.3-inch screen. At $480, it’s pricey, but the price feels totally fair given the screen size and features on offer (and the company just announced a smaller, $350 7.8-inch model). I kind of love it.

The best way to summarize the Boox Note Air is that it’s really just an Android tablet with an e-ink screen and a book-centric UI. That means it can do almost everything an Android tablet can, as long as you don’t need smooth framerates and, you know, color (although Boox also has a colorful e-reader I’m currently testing too).

Boox Note Air
Credit: Boox

Want to use a full-fledged browser? Sure. Want to check out Instagram photos in black and white? Why not. Want to take handwritten notes for a class? Go for it. You can even watch videos at laughably slow frame rates.

It’s a degree of freedom I’m not used to from a device with this display technology.

The Note Air has its own book store, but I’m not gonna lie: I didn’t use it. The first thing I did when I turned on the tablet was open up the custom app store (you can install Google Play, but it’s a little annoying) and download the Kindle app.

It works pretty much exactly like the Kindle app for phones, and after tweaking a few display settings (you can optimize the screen for refresh rate or image quality; I prefer the latter for reading), the overall experience was remarkably similar to using a regular Kindle.

That is, except for the fact that the screen is 10.3 inches, which is wonderful. Although most of my time is spent reading fiction, for which a smaller screen suffices, I also often read textbooks on my Surface. I could see myself loading textbooks and PDFs onto the Note Air, as the extra real estate makes such form factors much more useful.

The note-taking experience with the included Notes app is surprisingly great. While there’s a little bit more latency than, say, a Surface Book 3, the experience balances out because the Note Air looks and feels so much more like paper.

The included stylus isn’t tilt-sensitive, but it does a great job of recognizing different pressures and accurately following my pen strokes. It really does feel surprisingly close to writing on paper.

There are just a few caveats to the device’s flexibility. First, installing the Google Play Store involves a clumsy and slow workaround, as the device is not Google Play certified.

This is not as big of a deal as I expected. Most people aren’t going to install too many apps on a device like this, and every app I did actually want to install was available from the Boox’s App Store. Alternatively, you could install them from a site like APK Mirror.

Second, the device is running Android 10 — already outdated — and it’s not clear how long OS updates will be supported. While it’s unlikely to be a real issue, it is technically possible that future updates to the Kindle app or others won’t support this version of Android correctly.

Third: the inking experience is lackluster outside of Boox’s own Notes app. In that app, the stylus is remarkably fast; combined with the more paper-like experience, it’s just as satisfying as the inking experience on a device like my Surface Book.

But I’d hoped I could run OneNote on the device and use it to follow up on all the notes I’d taken on my Surface. Unfortunately, the pen input is too laggy in other apps.

Still, I found these caveats to be minor setbacks. All this talk of customization is just icing on the cake: 90 percent of the time, I just used the device for reading. Most of the rest was spent writing notes, and I only occasionally dabbled in other apps. It’s still an ereader and note-taking device first and foremost; it’s just nice to have the option to do more with it.

Other miscellaneous notes:

  • The device doesn’t have an automatic brightness sensor, but it does support both ‘warm’ and ‘cool’ lighting like the Kindle Oasis.
  • Most of the interface will be very familiar to Android users. There’s even a drop-down menu. But the home screen is very much catered to reading and writing.
  • As with most e-ink devices, the battery lasts forever for reading. Other activities and using full brightness will drain it more quickly, but it’s longevity you’ll measure in weeks, not days.
  • It’s quite a classy-looking device. I really dig the anodized blue.
  • It isn’t waterproof, so don’t take it in the shower.
  • It has a speaker!
  • It charges via USB-C (the Kindle is still stuck on micro USB, sigh).
  • Boox just launched a 7.8-inch version with much the same features, for those who’d like a smaller screen closer to a typical paperback book.

Let me be clear: Not everyone interested in an e-reader needs a giant screen, stylus compatibility, and the ability to install their own apps. And some might prefer the idea of a device that can only really do one thing. If that’s the case, the Boox Note Air might not be the right choice.

But the great thing about the Boox Note Air is that it adds these features with virtually no compromises to the primary reading experience. Once you install your book software of choice, you almost never have to interact with the OS, and you still get the massive battery life and paper-like readability you’d expect from an e-reader.

If you’ve been interested in e-readers like the Kindle, but like the idea of a larger screen and the ability to take notes, the Boox Note Air was made for you.

Did you know we have a newsletter all about consumer tech? It’s called Plugged In – and you can subscribe to it right here.

So you bought an NFT? Doesn’t mean you also own it

NFTs, or non-fungible tokens, first captured the public imagination when a digital collage by an artist named Beeple sold for US$69 million (£51 million) at Christie’s in March 2021. Since then, there has been an explosion in the use of these units for storing digital content, which are bought and sold using online ledgers known as blockchains.

Since that initial connection with art, we are seeing NFTs being used in numerous other ways. Notably, many are being traded as collectibles on exchanges like OpenSea and Rarible. Lately, for example, a series of 8,888 adorable “Pudgy Penguins” made a splash, each reflecting its own unique characteristics, with one selling for a record 150 ethereum (about US$500,000).

Pudgy Penguins for sale on OpenSea
Pssst, fancy a penguin? OpenSea

Yet whether it is a remarkable piece of digital artwork or a cute digital penguin, NFTs are essentially tradeable jpegs or gifs. Unlike physical collectibles, an NFT owner will not be able to display the asset in their home – except on a screen. They might think they could display it on a website, but this isn’t necessarily the case. So what is someone actually getting when they buy an NFT, and what do they truly own from a legal perspective?

The new frontier

To understand NFTs, it is important to understand what is meant by “fungible”. Fungible is derived from the Latin verb fungi, meaning to perform. In the broader context, this means interchangeable and relates to whether something can be exchanged.

Money is fungible, in the sense that you can buy a commodity worth £10 with any £10 note; it doesn’t matter which one you use. On the other hand, NFTs cannot be exchanged like for like with another. They are each one of a kind or one of a limited edition.

Content sold as NFTs can be created in many ways. It can be computer-generated, which was the basis for the production of 10,000 unique CryptoPunks in 2017.

It can reflect a collaborative work, such as the English singer-songwriter Imogen Heap’s series of music NFTs, “Firsts”. These involved her improvising alongside visuals provided by artist Andy Carne. Or NFTs can represent a single work, such as Beeple’s artwork; or a series of items, such as the Kings of Leon’s “NFT Yourself” series in which the assets on offer included music albums with unique features and special concert tickets.

Limited rights

NFTs allow the owner of a limited work or collection to reach their audience directly. Whereas previously it was not possible to sell something like the first ever tweet, or a taco-themed gif, or indeed a piece of art online, now individuals, companies, or cultural organizations can do so as long as they are the rightful owner.

The creator can do this because, according to UK copyright law, copyright arises automatically when a work is created – as long as it reflects the “author’s own intellectual creation”. This means that the creator of a work is the owner of the copyright, and can do what they want with it.

When someone buys an NFT from the creator, they obtain ownership in the sense that it becomes their property. After all, an NFT is a digital certificate of ownership representing the purchase of a digital asset, traceable on the blockchain.

But the NFT holder does not have any other rights to the work. This includes those offered under copyright law, such as the right of communication to the public (in other words, making the asset available to the world at large), or the rights of adaptation or reproduction.

The situation is the same if you buy a physical collectible. Owning a painting does not automatically give you the right to display it in public. It also doesn’t give you the right to sue for infringement of copyright if someone reproduces the image in the painting without permission. To obtain such rights, you either need to be the copyright owner of the work or have the copyright assigned to you by the creator (in writing and signed).

The trouble with online content is that, by virtue of its digital nature, it is easy to share, copy and reproduce. Buyers of NFTs need to understand that they would be infringing the copyright if they engage in such activities without the permission of the right holder. The only way such rights can be transferred is through the terms embedded in the NFT, in the form of a license.

There have been some NFTs where the buyer has been granted the right to use the copyright in a limited way. For example, owners of CryptoKitties NFTs have been allowed to make up to US$100,000 in gross revenues from them each year. In other cases, creators have specifically restricted all commercial use of the work. For example, the Kings of Leon stipulated that their NFT music was for personal consumption only.

A picture of a CryptoKitty
CryptoKitties allow owners to make up to US$100,000 a year from them. Vector Factory

Buyers, therefore, need to be clear that the main reasons to buy an NFT are the speculative investment and the pleasure of having something unique from an admired artist, brand, sports team, or whatever. Unless the terms allow it, buyers will only have a limited ability to share the creative work on public platforms or to reproduce it and make it available for others.

Incidentally, buyers should also be aware that the blockchain cannot verify whether a creative work is authentic. Someone can take another person’s work and tokenize it as an NFT, thereby infringing the rights of the copyright owner. You need to be sure that you are buying something that originated from the creator.

In short, NFTs are probably here to stay, but they clearly raise ownership questions relating to copyright law. This may not be immediately clear to most people, and it’s important that you understand the limits of what you are getting for your money.

Article by Dinusha Mendis, Professor of Intellectual Property and Innovation Law and Acting Deputy Dean (Research), Faculty of Media and Communication, Bournemouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Inside Boston Dynamics’ project to create humanoid robots

Boston Dynamics is known for the flashy videos of its robots doing impressive feats. Among Boston Dynamics’ creations is Atlas, a humanoid robot that has become popular for showing unrivaled ability in jumping over obstacles, doing backflips, and dancing. The videos of Boston Dynamics robots usually go viral, accumulating millions of views on YouTube and generating discussions on social media.

And the robotics company’s latest video, which shows Atlas successfully running a parkour track, is no exception. Within hours of its release, it received hundreds of thousands of views and became one of the top ten trends on U.S. Twitter.

But the more interesting video was an unprecedented behind-the-scenes account of how Boston Dynamics’ engineers developed and trained Atlas to run the parkour track. The video shows some of Atlas’s failures and is a break from the company’s tradition of showing highly polished results of this work. The video and an accompanying blog post provide some very important insights into the challenges of creating humanoid robots.

Research vs commercial robots

Officially, Boston Dynamics is a for-profit organization. The company wants to commercialize its technology and sell products. But at its heart, Boston Dynamics is a research lab filled with engineers and scientists who also want to push the limits of science regardless of the commercial benefits. Aligning these two goals is very difficult, and a testament to that difficulty is the fact that Boston Dynamics has changed ownership several times in the past decade, going from Google to SoftBank to Hyundai.

The company is looking to create a successful business model, and it has already released a few commercial robots, including Spot, a multi-purpose robo-dog, and Stretch, a mobile robo-arm that can move boxes. Both have found interesting applications in different industries, and with Hyundai’s manufacturing capacity, Boston Dynamics might be able to turn them into profitable ventures.

Atlas, on the other hand, is not one of Boston Dynamics’ commercial projects. The company describes it as a “research platform.”

This is not because humanoid biped robots are not commercially useful. We humans have designed our homes, cities, factories, offices, and objects to accommodate our physique. A biped robot that could walk on surfaces and handle objects as we do would have unlimited utility and be one of the—if not the—most lucrative business opportunities for the robotics industry. It would have a great advantage over current mobile robots, which are restricted to specific environments (flat grounds, uniform lighting, flat-sided objects, etc.) or require their environments to be changed to accommodate their limits.

However, biped robots are also really hard to create. Even Atlas, which is by far the most advanced biped robot, is still a long way from reaching the smooth and versatile motor skills of humans. And a look at some of the failures in the new Atlas video shows the gap that remains to be filled.

Atlas Falls Down

The challenges of biped robots

In animals and humans, growth and learning happen together. You learn to crawl, stand, walk, run, jump, and do sports as your body and brain develop.

But growing robots is impossible (at least for the foreseeable future). Robotics engineers start with a fully developed robot (which is iteratively adjusted as they experiment with it) and must teach it all the skills it needs to use its body efficiently. In robotics, as with many other fields of science, engineers seek ways to avoid replicating nature in full detail by taking shortcuts, creating models, and optimizing for goals.

In the case of Atlas, the engineers and scientists of Boston Dynamics believe that optimizing the robot for parkour performance will help them achieve all the nuances of bipedal motor skills (and create sensational videos that get millions of views on YouTube).

As Boston Dynamics put it in its blog post: “A robot’s ability to complete a backflip may never prove useful in a commercial setting… But it doesn’t take a great deal of imagination or sector-specific knowledge to see why it would be helpful for Atlas to be able to perform the same range of movements and physical tasks as humans. If robots can eventually respond to their environments with the same level of dexterity as the average adult human, the range of potential applications will be practically limitless.”

So, the basic premise is that if you can get a robot to do backflips, jump across platforms, vault over barriers, and run on very narrow paths, you would have taught it all the other basic movement and physical skills that all humans possess.

The blog further states: “Parkour, as narrow and specific as it may seem, gives the Atlas team a perfect sandbox to experiment with new behaviors. It’s a whole-body activity that requires Atlas to maintain its balance in different situations and seamlessly switch between one behavior and another.”

The evolution of Atlas has been nothing short of impressive. Aside from the flashy moves, it is showing some very interesting fundamental capabilities, such as adjusting its balance when it makes an awkward landing. According to Boston Dynamics, the engineers have also managed to generalize Atlas’s behavior by providing it with a set of template behaviors such as jumping and vaulting, and letting it adjust those behaviors to new scenarios it encounters.

But the robot still struggles with some very basic skills seen in all primates. For example, in some cases, Atlas falls flat on its face when it misses a jump or loses its balance. In such cases, primates instinctively stretch out their arms to soften the blow of the fall and protect their head, neck, eyes, and other vital parts. We learn these skills long before we start running on narrow ledges or jumping on platforms.

A complex environment such as the parkour track helps discover and fix these challenges much faster than a flat and simple environment would.

Simulation vs real-world training

One of the key challenges of robotics is physical-world experience. The Atlas video displays this very well. A team of engineers must regularly repair Atlas after it gets damaged. This cycle drives up costs and slows down training.

Training robots in the physical world also has a scale problem. The AI systems that guide the movements of robots such as Atlas require a huge amount of training, orders of magnitude more than a human would need. Taking Atlas through the parkour track thousands of times doesn't scale: it would take years of training and incur huge costs in repairs and adjustments. Of course, the research team could slash training time by running multiple prototypes in parallel on separate tracks, but that would significantly increase costs and require huge investments in gear and real estate.

Engineers at Boston Dynamics must regularly repair Atlas

An alternative to real-world training is simulated learning. Software engineers create three-dimensional environments in which a virtual version of the robot can undergo training at a very fast pace and without the costs of the physical world. Simulated training has become a key component of robotics and self-driving cars, and there are several virtual environments for the training of embodied AI.
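For a flavor of what that looks like in practice, here's a minimal sketch of a simulated training loop using the classic OpenAI Gym interface. The Humanoid environment is a generic MuJoCo biped standing in for Atlas (Boston Dynamics' tooling is proprietary), and the random policy is a placeholder for a real learning algorithm.

```python
# Minimal sketch of a simulated training loop (classic Gym API, pre-0.26).
# "Humanoid-v2" is a generic MuJoCo biped standing in for Atlas; the random
# policy below is a placeholder for an actual learning algorithm.
import gym

env = gym.make("Humanoid-v2")
obs = env.reset()
for step in range(100_000):                # cheap to run millions of steps
    action = env.action_space.sample()     # swap in a learned policy here
    obs, reward, done, info = env.step(action)
    if done:                               # the virtual robot fell over:
        obs = env.reset()                  # just reset, nothing to repair
env.close()
```

The whole point of the virtual loop is that a fall costs a reset instead of a repair bill, which is why simulation carries most of the training load before anything runs on hardware.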

But virtual worlds are just an approximation of the real world. They always miss small details that can have a significant impact, and they don’t obviate the need for training robots in the physical world.

The physical world exposes some of the challenges that are very hard to simulate in a virtual environment, such as slipping off an unstable ledge or the tip of the foot getting stuck in a crevice.

The Atlas video shows several such cases. One notable example takes place when Atlas reaches a barrier and uses its arm to vault over it. This is a simple routine that doesn't require great physical strength. But although Atlas manages the feat, its arm shakes awkwardly.

“If you or I were to vault over a barrier, we would take advantage of certain properties of our bodies that would not translate to the robot,” according to Scott Kuindersma, Atlas Team Lead. “For example, the robot has no spine or shoulder blades, so it doesn’t have the same range of motion that you or I do. The robot also has a heavy torso and comparatively weak arm joints.”

These kinds of details are hard to simulate and need real-world testing.

Perception in robots

According to Boston Dynamics, Atlas uses “perception” to navigate the world. The company’s website states that Atlas uses “depth sensors to generate point clouds of the environment and detect its surroundings.” This is similar to the technology used in self-driving cars to detect roads, objects, and people in their surroundings.

Atlas uses depth sensors to map its surroundings
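To give a rough sense of what "generating point clouds" means, the sketch below back-projects a depth image into 3D points with the standard pinhole camera model. The intrinsics and the random depth frame are made-up values for illustration, not Atlas's actual sensor parameters.

```python
# Back-project a depth image into a 3D point cloud (pinhole camera model).
# The intrinsics (fx, fy, cx, cy) are illustrative values, not Atlas's.
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth                       # depth in meters at each pixel
    x = (u - cx) * z / fx           # horizontal offset scales with depth
    y = (v - cy) * z / fy           # vertical offset scales with depth
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A fake 480x640 depth frame with readings between 0.5m and 4m
cloud = depth_to_point_cloud(np.random.uniform(0.5, 4.0, (480, 640)))
print(cloud.shape)  # (307200, 3): one XYZ point per pixel
```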

This is another shortcut that the AI community has been taking. Human vision doesn’t rely on depth sensors. We use stereo vision, parallax motion, intuitive physics, and feedback from all our sensory systems to create a mental map of the environment. Our perception of the world is not perfect and can be duped, but it’s good enough to make us excellent navigators of the physical world most of the time.

It will be interesting to see whether vision and depth sensors alone will be enough to bring Atlas on par with human navigation, or whether Boston Dynamics will develop a more complicated sensory system for its flagship robot.

Atlas still has a long way to go. For one thing, the robot will need hands if it is to handle objects, and that is itself a very challenging task. Atlas probably won't be a commercial product anytime soon, but it is providing Boston Dynamics and the robotics industry with a great platform to learn about the challenges that nature has solved.

“I find it hard to imagine a world 20 years from now where there aren’t capable mobile robots that move with grace, reliability, and work alongside humans to enrich our lives,” Boston Dynamics’ Kuindersma said. “But we’re still in the early days of creating that future. I hope that demonstrations like this provide a small glimpse of what’s possible.”

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.

Next-gen wheelchairs are modular and shapeshifting

Wheelchairs provide valuable independence to their owners. Designs vary according to terrain and user needs, both of which can change over time. However, prices make it difficult to afford more than one chair. In response, designers are taking cues from bikes and robotics to make wheelchairs that adapt to varied user needs.

The UNAwheel Maxi device is a wheelchair add-on. It hooks onto the front of an existing chair and is compatible with basic and active wheelchairs without adding extra weight. It comes with button steering to accelerate, decelerate, and turn.

The steering section is made of a combination of metal (hydroforming/cutting technology) and plastic (injection molding); the handles are made of rubber and the main body of plastic (RIM). It is easy to operate and can be quickly attached to and detached from the wheelchair.

The UNAwheel Maxi brings added functionality to conventional wheelchairs

The device comes with a rechargeable battery, can run up to 30 kilometers on a charge, and can reach speeds of up to 20 kilometers per hour. It's a great option for changing terrain and uphill commutes.

 It’s perfect for people who can’t maneuver wheelchairs, or have to deal with terrain and uphill commutes. When you don’t need it, the UNA wheel simply detaches from the wheelchair, turning it into a hand-operatedvehicle once again.

A gamechanger for flying with a wheelchair

Swimmer Patrick Flanagan recently had his wheelchair damaged in transit to the Tokyo Paralympics. It's a common problem.

Startup Revolve Air has a solution: the first standard wheelchair certifiable by airlines as carry-on luggage. It folds to a third of its size, removing the need to check in a wheelchair hours before a flight and wait for it at the luggage carousel on arrival. The company plans to launch a pilot in 2022 and believes users could eventually hire the chairs via an app.

A new kind of wheel comes to micromobility

It’s not only designers improving wheelchairs. Folks with disabilities are limited if they want to hire an adapted bike or mobility scooter. But not anymore. In July Bird teamed up with Scootaround to pilot a first-of-its-kind accessible mobility program. Folks with disabilities can choose from three accessible vehicle types using the Bird app.

Long term, this could set a powerful precedent for inclusion in mobility, especially for temporary users, and be a gamechanger for wheelchair users. 

This site perfectly encapsulates the horrors of today’s internet

Surfing the web isn’t what it used to be.

The halcyon era of peaceful browsing on clean sites is now a distant memory. Today’s internet is a digital hellscape of pop-up ads, notification prompts, and paywall blocks.

Navigating these barriers can be more trouble than it’s worth. At times, it’s easier to just hit the back button.

A new parody website encapsulates this harrowing experience. Named “How I Experience Web Today”, the site obscures the content you want behind an interminable stream of crap.

The agony mercifully ends once you leave the page — but not before a final "Leave site?" prompt pops up. It's the poisonous cherry on top of the nauseating cake.

How I Experience Web Today shows what modern sites have become.
Please, just let me go.

The site’s creator, a developer called Guangyi Li, has made a valiant attempt to capture the horrors of contemporary browsing.

The reality, however, can be even worse. At least the parody has no CAPTCHA images or (*shudder*) autoplay videos.

Developers are often unfairly blamed for these infuriating intrusions, but they’re not the real culprits.

The features are primarily ways for businesses and creators to monetize free content. Unfortunately, the quest for cash can create some horrendous user experiences.

I recognize the irony in making these complaints on a site with ads of its own. But TNW is a mere juvenile delinquent compared to the war criminals of web design.

You may disagree, but take solace in the knowledge that even we perpetrators are victims of the transgressions.

Jolly good! UK launches its first wireless EV charging trial

Residents of Nottingham are going to witness a UK first: a wireless charging electric vehicle trial. 

Nine electric taxis will roam the city's streets as part of the trial, WiCET (Wireless Charging of Electric Taxis).

WiCET is a demonstration project funded by the UK Office for Zero Emission Vehicles (OZEV), as one of Innovate UK’s portfolio of on-street wireless charging (OSWC) projects. 

The etaxi fleet will consist of five plug-in hybrid LEVC TX taxis and four all-electric Nissan Dynamo taxis, all bearing the message “This electric taxi will charge wirelessly.”

Credit: WiCET
The electric taxis are easily recognizable by their blue-green livery.

To put it simply, the electric taxis will wirelessly top up their batteries by parking over induction pads built into the street at the taxi rank.

Credit: WiCET
How wireless charging will work.

The WiCET taxis will be available for the general public to hail, and they will collect vehicle data during rides, including journey distances and battery levels.

According to the project partners, wireless charging will improve service availability for passengers and give taxi drivers more time to collect fares. A special billing system will also be developed to ensure that drivers are correctly charged for the electricity they use.

The trial will run for six months and is due to complete by March 2022. Its outcomes will be used to prove the technical and commercial case for the wireless charging of black cabs in medium and large cities.

The project is being funded by Innovate UK with some £3.4 million. 


Do EVs excite your electrons? Do ebikes get your wheels spinning? Do self-driving cars get you all charged up? 

Then you need the weekly SHIFT newsletter in your life. Click here to sign up.

Eureka! Brain implant creates feelings in the fingertips

In a first-in-human study, a minimally invasive brain implant has elicited a sense of touch in the fingertips of two people.

Researchers have previously electrically stimulated folds of the brain, called gyri, to restore some generalized sensation to the hand. The new technique targets the harder-to-reach grooves, known as sulci, to evoke feelings in the fingertips.

Study co-author Chad Bouton, a professor at The Feinstein Institutes for Medical Research, said the approach could help people with paralysis and neuropathy:

From buttoning our shirts to holding a loved one’s hand, our sense of touch may be taken for granted until we lose it. These results show the ability to generate that sensation, even after it is lost, which may lead us to a clinical option in the future.

Brain stimulation

The research uses a technique called stereoelectroencephalography (SEEG). This is a minimally invasive surgical procedure that involves placing electrodes in targeted areas of the brain.

The researchers implanted SEEG electrodes in the sulci of two volunteers with intractable epilepsy. The participants were already undergoing pre-operative seizure monitoring for surgical treatment of their condition. Per the study paper:

The decisions regarding whether to implant, the electrode targets, and the duration for implantation were based entirely on clinical grounds without reference to this investigation. Based on these clinical indications, all electrodes were implanted in the right hemisphere for both participants.

When the electrodes were activated, the participants said they felt “tingling” or the “sensation of electricity” localized to the hand and fingertips.

The researchers found that stimulating the sulci evoked these feelings more often than stimulating the gyri.  

Credit: The Feinstein Institutes for Medical Research
Study co-author Stephan Bickel (right) holds an SEEG electrode.

Their study marks another milestone moment for brain-computer interfaces. In recent years, researchers have also restored people’s sense of touch by connecting a robotic arm to the brain and by sending neural signals to a haptic system. The new technique could provide a less invasive method of stimulating precise areas of the body.

Greetings Humanoids! Did you know we have a newsletter all about AI? You can subscribe to it right here.

Employees hate their commute, not the office

Since the pandemic, working from home has become the new norm, but there’ll be a pinch of sadness when we eventually go back to the office. 

On the upside, we'll see our colleagues again and stop wearing PJs all day long; on the downside, we'll lose a significant benefit: the lack of a commute.

A survey by Hubble showed that 79% of respondents consider the lack of a commute one of the biggest advantages of working from home.

Interestingly, the office rental company found a direct correlation between employees' commute time and their enjoyment of working from home.

It comes as no surprise that the farther the employees live from the office, the more they appreciate working remotely.

To further illustrate, the graph below shows how the distance from the office affects the experience of working from home.

Credit: Hubble
The correlation between commute time and remote work enjoyment.

Of those surveyed who live over two hours from the office, 84.2% said they’d had a positive experience working remotely, compared to only 56.4% who live less than 15 minutes away. 

Similarly, the negative experience of working remotely is minimal for the employees who reside more than an hour away from work, while it reaches 15.4% for those who live under 15 minutes away.

With this trend in mind, it's no surprise that the staff who travel the farthest prefer to work from home more often, with nearly half of them preferring to do so daily.

Credit: Hubble
The link between commute time and frequency of working remotely.

What's most striking, though, is that a majority in every category would rather work remotely at least twice per week.

If, however, there were a workspace closer to employees' homes, a significant number of those living between 15 and 120 minutes from the office would use a coworking space at least once a week. Even those who live more than two hours away would use one "occasionally."

Credit: Hubble
How often employees would use an alternative workspace, based on their commute time to the main office.

So what is it that makes us dislike the commute?

Apart from Covid concerns, the survey revealed that employees appreciate their gains in personal time, money, and freedom.

Notably, 55% of respondents ranked "financial savings" among the top three benefits of working from home. To give you an idea, a recent study by Totaljobs found that skipping the commute could save Londoners up to £14,000 over the course of their careers.

Personal benefits aside, the lack of commuting can also do wonders for the environment, considering that transport accounted for 24% of global greenhouse gas emissions in 2020.

And while ditching the office altogether can have a negative psychological impact and affect work-life separation, perhaps a hybrid model or workspaces that can be reached on foot or by bike could combine the best of both worlds.


Do EVs excite your electrons? Do ebikes get your wheels spinning? Do self-driving cars get you all charged up? 

Then you need the weekly SHIFT newsletter in your life. Click here to sign up.

Google Maps will soon introduce tolls and… price your ride

This week, Google shared what appears to be Google Maps' next major feature with members of its preview program: the automatic display of toll prices for roads and bridges along a driver's navigation route. Google invited preview members to take a survey about their experience with these features.

The survey suggests that toll prices, and perhaps total trip costs, would be displayed along a driving route before the user selects it, allowing drivers to choose between faster tolled routes and cheaper but longer alternatives.

Plan for car-free travel

It's part of a bigger trend to incorporate more advanced transport planning into Google Maps. Last week, Spin, the micromobility unit of Ford Motor Company, announced the integration of its escooters and ebikes into Google Maps, making it easier to get around without a car.

Google Maps details the location and distance to escooters and ebikes

Riders can use Maps to see the nearest available Spin ebike or escooter in real time. The map details how long it will take to walk to the vehicle, its estimated battery range, and when you can expect to arrive. Upon arrival, you can use the Spin app to pay for the vehicle, unlock it, and ride. Easy.

Pay for your parking or train ticket in Google Maps

Earlier this year, Google Maps announced a collaboration with Google Pay so users can pay for street parking and transit fares right from Google Maps.

Pay for a parking meter in Google Maps

COVID-19 increased the desire for hands-free transactions. You can pay your meter right from driving navigation in Maps and avoid touching the meter altogether.

Public transport passengers can plan their trip on Google Maps and buy a ticket without switching to a transport app. The feature is available for 80 transit agencies worldwide: when you get transit directions, you'll see the option to pay with your phone using the credit or debit cards already linked to your Google Pay account.

Giving people options at the route planning stage makes it easy to travel and increases the incentive to opt for sustainable transport. Google is not only directing us but becoming part of the solution for greener transport.

 

3 ways ‘algorithmic management’ makes work more stressful and less satisfying

If you think your manager treats you unfairly, the thought might have crossed your mind that replacing said boss with an unbiased machine that rewards performance based on objective data is a path to workplace happiness.

But as appealing as that may sound, you’d be wrong. Our review of 45 studies on machines as managers shows we hate being slaves to algorithms (perhaps even more than we hate being slaves to annoying people).

Algorithmic management — in which decisions about assigning tasks to workers are automated — is most often associated with the gig economy.

Platforms such as Uber were built on technology that used real-time data collection and surveillance, rating systems, and “nudges” to manage workers. Amazon has been another enthusiastic adopter, using software and surveillance to direct human workers in its massive warehouses.

As algorithms become ever more sophisticated, we’re seeing them in more workplaces, taking over tasks once the province of human bosses.

To get a better sense of what this will mean for the quality of people’s work and well-being, we analyzed published research studies from across the world that have investigated the impact of algorithmic management on work.

We identified six management functions that algorithms are currently able to perform: monitoring, goal setting, performance management, scheduling, compensation, and job termination. We then looked at how these affected workers, drawing on decades of psychological research showing what aspects of work are important to people.

Just four of the 45 studies showed mixed effects on work (some positive and some negative). The rest highlighted consistently negative effects on workers. In this article we’re going to look at three main impacts:

  • Reduced task variety and skill use
  • Reduced job autonomy
  • Increased work intensity and insecurity

1. Reduced task variety and skill use

The way algorithmic management can reduce task variety and skill use is demonstrated by a 2017 study on the use of electronic monitoring to pay British nurses providing home care to elderly and disabled people.

The system under which the nurses worked was meant to improve their efficiency. They had to use an app to “tag” their care activities. They were paid only for the tasks that could be tagged. Nothing else was recognized. The result was they focused on the urgent and technical care tasks — such as changing bandages or giving medication — and gave up spending time talking to their patients. This reduced both the quality of care as well as the nurses’ sense of doing significant and worthwhile work.
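A toy illustration (the task names and rates below are invented, not from the study) shows how such a pay scheme mechanically erases untagged work:

```python
# Invented example: when pay is computed only from taggable tasks, untagged
# work -- like talking with a patient -- contributes nothing.
TAGGABLE_RATES = {"change_bandage": 8.0, "give_medication": 6.0}  # pay per task

def shift_pay(activities):
    return sum(TAGGABLE_RATES.get(task, 0.0) for task in activities)

shift = ["change_bandage", "talk_with_patient", "give_medication"]
print(shift_pay(shift))  # 14.0 -- the conversation counted for zero
```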

Research suggests the increasing use of algorithms to monitor and manage workers will reduce task variety and skill use. Call centers, for example, already use technology to assess a customer's mood and instruct the call center worker on exactly how to respond, from what emotions they should display to how fast they should speak.

2. Reduced job autonomy

Gig workers speak of a "fallacy of autonomy" arising from the apparent ability to choose when and how long they work. In reality, platform algorithms use metrics such as acceptance rates to calculate performance scores and determine future assignments.

This loss of autonomy is underlined by a 2019 study that interviewed 30 gig workers using the "piecework" platforms Amazon Mechanical Turk, MobileWorks, and CloudFactory. In theory, workers could choose how long they worked. In practice, they felt they needed to be constantly on call to secure the best-paying tasks.

This isn’t just the experience of gig workers. A detailed 2013 study of the US truck driving industry showed the downside of algorithms dictating what routes drivers should take, and when they should stop, based on weather and traffic conditions. As one driver in the study put it: “A computer does not know when we are tired, fatigued, or anything else […] I am also a professional and I do not need a [computer] telling me when to stop driving.”

3. Increased intensity and insecurity

Algorithmic management can heighten work intensity in a number of ways. It can dictate the pace directly, as with Amazon’s use of timers for “pickers” in its fulfillment centers.

But perhaps more pernicious is its ability to ramp up work pressure indirectly. Workers who don't really understand how an algorithm makes its decisions feel more uncertain and insecure about their performance. They worry about every aspect of their work that might affect how the machine rates and ranks them.

For example, in a 2020 study of the experience of 25 food couriers in Edinburgh, the riders spoke about feeling anxious and being “on edge” to accept and complete jobs lest their performance statistics be affected. This led them to take risks such as riding through red lights or through busy traffic in heavy rain. They felt pressure to take all assignments and complete them as quickly as possible so as to be assigned more jobs.

Avoiding a tsunami of unhealthy work

The overwhelming extent to which studies show negative psychological outcomes from algorithmic management suggests we face a tsunami of unhealthy work as the use of such technology accelerates.

Currently, the design and use of algorithmic management systems are driven by “efficiency” for the employer. A more considered approach is needed to ensure these systems can coexist with dignified, meaningful work.

Transparency and accountability are key to ensuring workers (and their representatives) understand what is being monitored, and why, and that they can appeal those decisions to a higher, human power.


Article by Sharon Kaye Parker, Australian Research Council Laureate Fellow, Curtin University and Xavier Parent-Rocheleau, Professor, HEC Montréal

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The feds are investigating Tesla’s autopilot AGAIN — here’s why

It's hard to miss the flashing lights of fire engines, ambulances, and police cars ahead of you as you're driving down the road. But in at least 11 cases in the past three and a half years, Tesla's Autopilot advanced driver-assistance system did just that. This led to 11 accidents in which Teslas crashed into emergency vehicles or other vehicles at those scenes, resulting in 17 injuries and one death.

The National Highway Traffic Safety Administration has launched an investigation into Tesla's Autopilot system in response to the crashes. The incidents took place between January 2018 and July 2021 in Arizona, California, Connecticut, Florida, Indiana, Massachusetts, Michigan, North Carolina, and Texas. The probe covers 765,000 Tesla cars – virtually every car the company has made in the last seven years. It's also not the first time the federal government has investigated Tesla's Autopilot.

As a researcher who studies autonomous vehicles, I believe the investigation will put pressure on Tesla to reevaluate the technologies the company uses in Autopilot and could influence the future of driver-assistance systems and autonomous vehicles.

How Tesla’s Autopilot works

Tesla’s Autopilot uses cameras, radar, and ultrasonic sensors to support two major features: Traffic-Aware Cruise Control and Autosteer.

Traffic-Aware Cruise Control, also known as adaptive cruise control, maintains a safe distance between the car and other vehicles that are driving ahead of it. This technology primarily uses cameras in conjunction with artificial intelligence algorithms to detect surrounding objects such as vehicles, pedestrians and cyclists, and estimate their distances. Autosteer uses cameras to detect clearly marked lines on the road to keep the vehicle within its lane.
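To illustrate the camera-based idea, here's a classical lane-line detector built with OpenCV. This is a teaching sketch only, not Tesla's actual system, which relies on trained neural networks rather than hand-tuned filters.

```python
# Toy lane-line detector: edge detection plus a Hough transform.
# Real systems like Autosteer use learned models, not this pipeline.
import cv2
import numpy as np

def detect_lane_lines(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # strong intensity edges
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2, h // 2), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)                      # keep a road-shaped region
    edges = cv2.bitwise_and(edges, mask)
    # Fit straight segments to the remaining edge pixels
    return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=25)
```

Even this crude pipeline hints at why lighting matters so much to camera-only systems: glare or darkness degrades the edges everything downstream depends on.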

In addition to its Autopilot capabilities, Tesla has been offering what it calls “full self-driving” features that include auto park and auto lane change. Since its first offering of the Autopilot system and other self-driving features, Tesla has consistently warned users that these technologies require active driver supervision and that these features do not make the vehicle autonomous.

Credit: Rosenfeld Media/Flickr
Tesla's Autopilot display shows the driver where the car thinks it is in relation to the road and other vehicles.

Tesla is beefing up the AI technology that underpins Autopilot. The company announced on Aug. 19, 2021, that it is building a supercomputer using custom chips. The supercomputer will help train Tesla’s AI system to recognize objects seen in video feeds collected by cameras in the company’s cars.

Autopilot does not equal autonomous

Advanced driver-assistance systems have been available on a wide range of vehicles for decades. The Society of Automotive Engineers divides the degree of a vehicle's automation into six levels, from Level 0, with no automated driving features, to Level 5, which represents fully autonomous driving with no need for human intervention.

Within these six levels of autonomy, there is a clear divide between Level 2 and Level 3. In principle, at Levels 0, 1, and 2, the vehicle should be primarily controlled by a human driver, with some assistance from driver-assistance systems. At Levels 3, 4, and 5, the vehicle's AI components and related driver-assistance technologies are the primary controllers of the vehicle. For example, Waymo's self-driving taxis, which operate in the Phoenix area, are Level 4, meaning they operate without human drivers but only under certain weather and traffic conditions.

News coverage of a Tesla driving in Autopilot mode that crashed into the back of a stationary police car.

Tesla Autopilot is considered a Level 2 system, and hence the primary controller of the vehicle should be a human driver. This provides a partial explanation for the incidents cited by the federal investigation. Though Tesla says it expects drivers to be alert at all times when using the Autopilot features, some drivers treat the Autopilot as having autonomous driving capability with little or no need for human monitoring or intervention. This discrepancy between Tesla’s instructions and driver behavior seems to be a factor in the incidents under investigation.

Another possible factor is how Tesla ensures that drivers are paying attention. Earlier versions of Tesla's Autopilot were ineffective at monitoring driver attention and engagement while the system was on. The company instead relied on requiring drivers to periodically move the steering wheel, which can be done without watching the road. Tesla recently announced that it has begun using internal cameras to monitor drivers' attention and alert them when they are inattentive.

Another equally important factor contributing to Tesla’s vehicle crashes is the company’s choice of sensor technologies. Tesla has consistently avoided the use of lidar. In simple terms, lidar is like radar but with lasers instead of radio waves. It’s capable of precisely detecting objects and estimating their distances. Virtually all major companies working on autonomous vehicles, including Waymo, Cruise, Volvo, Mercedes, Ford, and GM, are using lidar as an essential technology for enabling automated vehicles to perceive their environments.

By relying on cameras, Tesla's Autopilot is prone to potential failures caused by challenging lighting conditions, such as glare and darkness. In its announcement of the Tesla investigation, the NHTSA reported that most incidents occurred after dark, in scenes with flashing emergency vehicle lights, flares, or other lights. Lidar, in contrast, can operate under any lighting conditions and can "see" in the dark.

Fallout from the investigation

The preliminary evaluation will determine whether the NHTSA should proceed with an engineering analysis, which could lead to a recall. The investigation could eventually lead to changes in future versions of Tesla's Autopilot and its other self-driving systems. It might also indirectly have a broader impact on the deployment of future autonomous vehicles; in particular, it may reinforce the need for lidar.

Although reports in May 2021 indicated that Tesla was testing lidar sensors, it’s not clear whether the company was quietly considering the technology or using it to validate their existing sensor systems. Tesla CEO Elon Musk called lidar “a fool’s errand” in 2019, saying it’s expensive and unnecessary.

However, just as Tesla is revisiting systems that monitor driver attention, the NHTSA investigation could push the company to consider adding lidar or similar technologies to future vehicles.

Article by Hayder Radha, Professor of Electrical and Computer Engineering, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Apple’s M1X Mac Mini may launch soon with new design and more ports

Prominent leaker Mark Gurman is at it again with another Apple rumor. This time around, he's corroborating earlier reports of a new Mac Mini with a few more details: it'll come with more ports, a new design, and the M1X processor. And it's coming "in the next several months."

The information comes from Gurman’s Power On newsletter; here’s the relevant quote in full:

The Mac mini is used for more basic tasks like video streaming, but many people use it as a software development machine, as a server or for their video editing needs. Apple knows that, so it kept the Intel model around. Well, expect that to go away in the next several months with a high-end, M1X Mac mini. It will have an updated design and more ports than the current model.

Apple’s Mac Mini was one of its first computers to receive the upgrade to its new ARM architecture, but the basic exterior design has remained unchanged for years, and the M1 chip meant it wasn’t really more powerful than the MacBooks (other than better thermals).

The new M1X Mac Mini is expected to come with a significantly revamped design and better performance. The added ports also suggest that Apple might be treating the Mac Mini as more than just a gateway desktop for casual workloads. It could be as versatile as the iMac for those who don't need the integrated display and would like to save some bucks.

Credit: Jon Prosser / RendersbyIan
Mac Mini render by Jon Prosser and RendersbyIan

Leaker Jon Prosser has pointed to a new design with a plexiglass top and a magnetic power adapter, but we've not seen much to back up those reports. I wouldn't be surprised if the design took a page from the new iMac, and the M1X chip will probably allow Apple to make the Mini even mini-er.

With new iPhones, iPads, MacBooks, and AirPods also expected soon, it looks like Apple is going to have a busy fall.

Via MacRumors

Did you know we have a newsletter all about consumer tech? It’s called Plugged In – and you can subscribe to it right here.

Review: The OnePlus Buds Pro are legit AirPods Pro competitors for $150

I haven’t been a huge fan of OnePlus’ various earbuds so far. They’ve either been uncomfortable, had poor controls, or didn’t sound that great. The one thing they’ve always had going for them is a low price tag.

The OnePlus Buds Pro, the company’s first model with noise-canceling — which are clearly ‘inspired’ by the AirPods Pro — finally get it right. Well, mostly. But at $150, it’s hard to fault them.

They’re pretty comfortable

The OnePlus Buds Pro are shaped a lot like the AirPods Pro, with their angled, oval ear tips and a long stem. Also like the AirPods Pro, the OnePlus Buds Pro aren't trying to create the deepest, tightest seal, instead focusing on providing an inoffensive fit that should work for most people. Active noise canceling (ANC) is on board, after all, so attenuating sound by passive means isn't of paramount importance.

The Buds come in white too

I've been using in-ear monitors (IEMs) for years, so I personally tend to prefer something with a more secure fit that goes deeper into the ear canal, but the OnePlus Buds Pro should be comfortable even for people who think silicone-tipped earbuds feel like plungers in their ears.

I also appreciate that the case is quite compact despite the earbuds themselves being on the larger end of the spectrum. It’s thin too, so it won’t give you a huge lump in your pocket.

The controls are reliable

Also much like the AirPods Pro, the OnePlus Buds operate via squeezing the stem, rather than the usual taps or pressing a button. Although it feels a little awkward and slow at first, I appreciate that this system helps avoid accidental touches. It also means the earbuds won’t randomly activate if they get splashed by some water or sweat.

Otherwise, the controls are pretty basic:

  • Squeeze once to play/pause
  • Squeeze twice to skip forward
  • Squeeze thrice to skip back
  • Squeeze and hold for one second to switch between ANC on, ANC off, and transparency modes
  • Squeeze for three seconds to enter 'Zen Mode Air,' a white noise mode (more on this later)

Mercifully, the controls are fully accessible from either earbud by default, so you keep full control when using just one bud. As someone who often rides with a single earbud in (transparency mode isn't useful on my bike due to wind noise), I appreciate this a lot.

The elephant in the room is the lack of volume control, which you'll have to adjust through your phone. This isn't the biggest deal for me, but it's not ideal.

Noise-canceling is decent

Long story short: the OnePlus Buds Pro won't blanket you in silence if you've used high-end noise-canceling products like the Sony WF-1000XM3 or the Bose QuietComfort Earbuds, but they're better than most.

They attenuate some lower mid and bass frequencies but don’t reduce higher frequencies all that much, so they’re not going to be great for, say, reducing chatter in a crowded restaurant, or especially for quieting individual voices.

OnePlus Buds Pro

They’ll probably quiet the hum of an airplane engine, but not the baby crying in the next aisle. The headphones come with a ‘smart’ noise-canceling mode, but I just ended up turning it up to the max.

Your best bet to improve isolation further is a stronger passive seal, so you might want to invest in some foam ear tips if noise cancellation is a priority, although the large air vents probably mean they'll never be kings of noise reduction. Still, some cancellation is better than none, and as someone who doesn't consider ANC a priority, I could live with the meager attenuation.

They sound fantastic

Perhaps the biggest surprise of this review was just how good the OnePlus Buds Pro sound. They sound fantastic. Frankly, they sound better than almost any wireless earbuds I've tested, whether aimed at audiophiles or not. To my ear, they flat out sound better than Sony's much more expensive WF-1000XM4 out of the box.

The bass thumps, the highs are clear, and the mids are largely free of coloration. They even have a surprisingly decent soundstage for earbuds, likely aided by the vented design, which prevents the pressure you can get with bass notes on some IEMs.

I won’t share the headphone measurements I captured as I’m currently reworking my setup, but based on the data and comparisons with other earbuds, the OnePlus Buds Pro seems to track the Harman Curve pretty closely, with perhaps a bit of extra brightness around vocals and brass.

Credit: OnePlus
OnePlus Buds Pro

The Harman Curve is a research-backed target response for headphones that yields a response most people consider to be neutral and engaging — with tonality similar to a pair of good speakers in a room. It may not be the ideal response for everyone, but it’s a great bet for most listeners.

More importantly, if you know you like/hate the Harman Curve’s sound, you’ll know whether these headphones are for you.
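For the curious, judging how closely a headphone "tracks" a target curve is conceptually simple: compare the measured frequency response to the target, point by point. Here's a toy sketch with invented numbers (real Harman targets are published per frequency; these values are made up):

```python
# Invented numbers: how one might score a headphone against a target curve.
import numpy as np

freqs = np.array([20, 100, 1000, 3000, 10000])     # Hz
target = np.array([6.0, 3.0, 0.0, 4.0, -2.0])      # dB, made-up target
measured = np.array([5.5, 2.0, 0.5, 5.0, -1.0])    # dB, made-up measurement

deviation = measured - target                      # positive = hotter than target
rms_error = np.sqrt(np.mean(deviation ** 2))       # lower = closer tracking
print(f"RMS deviation: {rms_error:.2f} dB")
```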

The headphones connect via SBC (bad sound quality), AAC (good sound quality), or LHDC (great sound quality, but few phones support it). To be clear, LHDC is not the same as LDAC, the latter being a high-quality Sony codec built into every modern Android phone. OnePlus phones are some of the very few I'm aware of that support LHDC in western markets.

That said, most people won't notice the difference between LHDC and AAC.

The white noise mode is actually pretty cool

Press and hold the earbud stems for 3 seconds, and you’ll enter ‘Zen Mode Air,’ a white noise mode. There are a few different soundscapes available, but it defaults to a forest-y type of sound.

I appreciate this feature because I often find it easier to focus on certain tasks when listening to white noise as opposed to music. It's hard for me not to get too invested in the music sometimes; yes, I've tried working while listening to classical music, but Vivaldi can be as much of a headbanger as Metallica.

Some extra goodies

This is where I put the stuff I didn’t know where else to fit:

  • The headphones support low latency mode when connected to OnePlus devices.
  • They also support Dolby Atmos on OnePlus devices for spatial audio. I’m not sure if it works with other Atmos-compatible phones, but the feature doesn’t work on my Pixel 4A (which doesn’t offer any kind of spatial audio).
  • Battery life is rated at up to 38 hours including the extra charges in the case.
  • With OnePlus Warp Charge support, you get 10 hours of playback from a 10-minute charge. Most regular USB-C chargers will top them up pretty fast too.
  • The earbuds case also supports wireless charging.
  • They are IP55 water and dust resistant, so they should survive most rainfall.

They’re missing a few things I want

The headphones definitely aren’t perfect. Aside from the relatively weak noise canceling and lack of volume controls, here are some other things to note:

  • There's no hotword detection for a voice assistant. I've been very spoiled by this on earbuds like the Pixel Buds and Sony WH-1000XM4. It's great when carrying a bunch of groceries, doing dishes, walking my dog, or riding my bike.
  • There's actually no way to access an assistant from the headphones by default. You can customize the left or right earbud to activate one with a triple squeeze, but then you lose the ability to skip back from both earbuds independently.
  • The headphones sound great out of the box, but every headphone should come with an EQ in case the sound signature isn't to your liking. Sometimes a subtle change is all it takes.

They’re easy to recommend — especially for OnePlus users

OnePlus Buds Pro

Despite some caveats, the OnePlus Buds Pro are an easy recommendation. They’re comfortable, the controls are mostly sensible, they have active noise canceling, and they have some of the best sound quality I’ve heard from wireless earbuds.

While they might not beat some of the more expensive headphones in terms of features, and the relatively lackluster noise canceling is disappointing, there are also headphones without any noise canceling that sell for more money.

While the earbuds are clearly best paired with OnePlus phones, the OnePlus Buds Pro still get most of the basics right for everyone else. For $150, they are well worth a listen.

Did you know we have a newsletter all about consumer tech? It’s called Plugged In – and you can subscribe to it right here.

Watch NASA’s stunning new panorama of the Martian landscape

NASA’s Perseverance rover may be hogging the headlines, but its predecessor is also capturing new insights about Mars.

Since August 2012, the Curiosity buggy has been studying whether the red planet could have once supported microbial life.

To mark the rover’s ninth year on Mars, NASA has created a 360-degree tour of Curiosity’s current home on Mount Sharp.

Credit: NASA/JPL-Caltech/MSSS
The car-sized rover was designed to explore Mars' Gale Crater.

The 5-kilometer-tall mountain lies within the 154-kilometer-wide basin of Mars' Gale Crater. NASA believes the area could contain clues about how the planet dried up.

Spacecraft monitoring the mission show that Curiosity is currently between an area enriched with clay material and another one full of sulfates. The mountain’s layers in this region may reveal how the environment lost its water over time.

“The rocks here will begin to tell us how this once-wet planet changed into the dry Mars of today, and how long habitable environments persisted even after that happened,” said Abigail Fraeman, Curiosity’s deputy project scientist.

Credit: NASA/JPL-Caltech/MSSS
Curiosity has used a drill on its robotic arm to take 32 rock samples.

The panorama was created by stitching together 129 photos captured by Curiosity’s Mast Camera.

The colors were then white-balanced to replicate how the landscape would appear under daylight on Earth. You can check it out by clicking on the video atop this article.
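For a rough sense of the stitching step, OpenCV can assemble overlapping frames into a panorama in a few lines. The filenames here are hypothetical, and NASA's real pipeline also uses the rover's pointing data and careful color calibration rather than fully automatic matching.

```python
# Sketch of panorama stitching with OpenCV; filenames are hypothetical.
import cv2

frames = [cv2.imread(f"mastcam_{i:03d}.png") for i in range(129)]
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)   # match features, warp, blend
if status == cv2.Stitcher_OK:
    cv2.imwrite("mount_sharp_panorama.png", panorama)
```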

The skies in the panorama are relatively clear because the photos were taken during the Martian winter, a time when there’s less dust in the atmosphere.

The image also captures the rover’s next destinations: Rafael Navarro mountain and an unnamed hill that’s taller than a four-story building.

After nine long years on Mars, Curiosity could still steal more of the Martian spotlight from its younger cousin.

Greetings Humanoids! Did you know we have a newsletter all about AI? You can subscribe to it right here.

EV charging security is a shit show

The growth of electric vehicles on the road means we need more EV charging stations. But research this month has raised a big old red flag regarding the security of electric vehicle charging. Researchers from Pen Test Partners explored six home electric vehicle charging brands and public EV charging networks and found significant problems. 

They found vulnerabilities in Project EV, Wallbox, EVBox, EO Charging's EO Hub and EO mini pro 2, and Hypervolt, as well as the public charging network Chargepoint. They also examined Rolec's chargers but found no vulnerabilities.

All hail the white hat hackers

You've got to love white hat hackers. They work tirelessly to find vulnerabilities before the bad guys do. Unbelievably, the companies they find fault with often only acknowledge their efforts after media reporting.

For home charging, smart EV chargers allow the car owner to remotely monitor and manage the charge state, speed, and timing of their car charger via an app. The mobile apps all communicate with the charger via an API and cloud-based platform. The chargers are usually connected to the owner’s home Wi-Fi network. 

The researchers found a range of vulnerabilities. They could hack the accounts of millions of EV chargers. In some cases, they could take over accounts and turn remote-controlled charging on and off.

In another case, they could use the charge point as a remote "back door" into the user's home network, from which an attacker could potentially compromise further devices in the user's home.

Some of the chargers had gone old school by using a Raspberry Pi Compute Module. The Pen Testers note:

We love the Pi, but in our opinion, it’s not suitable for commercial use in public devices as it is very difficult to fully secure it against the recovery of stored data.

In the case of the public EV charger, they believe it would be possible to access another user's account for a free charge. They also note a potentially bigger issue: destabilizing the grid by simultaneously switching chargers on and off:

While our power generators make huge efforts to maintain stability, these powerful chargers and security flaws combined have inadvertently created a cyber weapon that others could use to cause widespread power cuts.

Not the first rodeo for EV charger woes

This research is not the first example of security vulnerabilities in EV charging. 

In 2019, security researchers found vulnerabilities in Schneider Electric's EVlink Parking charging stations. Hackers could stop a car from charging and prevent anyone else from using the charger. A malicious actor could even unlock the cable mid-charge and walk away with it. There were also plenty of opportunities to gain full privileges, add users, change files, and more.

Last year, engineers at Southwest Research Institute simulated a malicious attack on an EV charger with a purpose-built spoofing device made from cheap hardware and simple software. Researchers could limit charging, as well as overcharge and undercharge the battery — the latter could result in big safety problems. Fortunately, the battery management system was able to detect the overcharging and disconnect.

Don’t EV customers deserve better?

Charging your EV at home can create home network vulnerabilities

We know about all these problems due to the mighty work of researchers. But hacking is a genuine threat in an industry that’s scaling rapidly. Worse, the industries collectively fail to learn from the legacy shit show that is IoT security. 

Beyond controlling the charging functionality itself, hacking can result in identity theft, fraud, and malware insertion. It’s disturbing that white hats found some of the most rudimentary security elements lacking. These include the absence of API authorization and firmware signing. 
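To make that missing piece concrete, here's a minimal sketch of object-level API authorization, with a hypothetical endpoint and toy data: the server must check not only that the request carries a valid session, but that the authenticated user actually owns the charger being controlled.

```python
# Hypothetical sketch of the object-level authorization check the
# researchers found missing; routes and data are toy stand-ins.
from flask import Flask, abort, request

app = Flask(__name__)

CHARGER_OWNERS = {"charger-42": "alice"}                  # stand-in database
SESSIONS = {"token-abc": "alice", "token-xyz": "bob"}

def authenticate():
    user = SESSIONS.get(request.headers.get("Authorization", ""))
    if user is None:
        abort(401)             # no valid session at all
    return user

@app.route("/api/chargers/<charger_id>/start", methods=["POST"])
def start_charge(charger_id):
    user = authenticate()
    # Without this check, any logged-in user could toggle anyone else's
    # charger just by guessing or enumerating its ID.
    if CHARGER_OWNERS.get(charger_id) != user:
        abort(403)
    return {"status": "charging", "charger": charger_id}

if __name__ == "__main__":
    app.run()
```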

EV charging is the poster child of a security problem with potential attacks via mobile apps, firmware updates, and physical access points. 

While the safety issues of electric vehicles are covered mainly by the international standard ISO 6469, there is no comparable global EV security standard. Developing one requires collaboration between parties such as automakers, charge point operators, manufacturers, utility companies, and third-party vendors. Each of these industries represents an entry point for hackers.

The vulnerabilities detailed in this article have been fixed. However, it won't be long until another security risk is exposed — hopefully discovered by researchers rather than exploited by cybercriminals.

Why emotion recognition AI can’t reveal how we feel

The growing use of emotion recognition AI is causing alarm among ethicists. They warn that the tech is prone to racial biases, doesn’t account for cultural differences, and is used for mass surveillance. Some argue that AI isn’t even capable of accurately detecting emotions.

A new study published in Nature Communications has shone further light on these shortcomings.

The researchers analyzed photos of actors to examine whether facial movements reliably express emotional states.

They found that people use different facial movements to communicate similar emotions. One individual may frown when they’re angry, for example, but another would widen their eyes or even laugh.

The research also showed that people use similar gestures to convey different emotions, such as scowling to express both concentration and anger.

Study co-author Lisa Feldman Barrett, a neuroscientist at Northeastern University, said the findings challenge common claims around emotion AI:

Certain companies claim they have algorithms that can detect anger, for example, when what they really have — under optimal circumstances — are algorithms that can probably detect scowling, which may or may not be an expression of anger. It's important not to confuse the description of a facial configuration with inferences about its emotional meaning.

Method acting

The researchers used professional actors because they have a “functional expertise” in emotion: their success depends on them authentically portraying a character’s feelings.

The actors were photographed performing detailed, emotion-evoking scenarios. For example, “He is a motorcycle dude coming out of a biker bar just as a guy in a Porsche backs into his gleaming Harley” and “She is confronting her lover, who has rejected her, and his wife as they come out of a restaurant.” 

The scenarios were evaluated in two separate studies. In the first, 839 volunteers rated the extent to which the scenario descriptions alone evoked one of 13 emotions: amusement, anger, awe, contempt, disgust, embarrassment, fear, happiness, interest, pride, sadness, shame, and surprise. 

Credit: Nature Communications
Experts coded the photos using the Facial Action Coding System, which specifies a set of action units that each represent the movement of one or more facial muscles.

Next, the researchers used the median rating of each scenario to classify them into 13 categories of emotion.

The team then used machine learning to analyze how the actors portrayed these emotions in the photos.

This revealed that the actors used different facial gestures to portray the same categories of emotions. It also showed that similar facial poses didn’t reliably express the same emotional category.
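To give a flavor of that kind of analysis (a simplified sketch with synthetic data, not the paper's actual method), each photo can be encoded as a vector of FACS action units, and a classifier can test how well those vectors predict the emotion category. Near-chance accuracy would echo the finding that poses alone aren't reliable.

```python
# Synthetic sketch: do facial action-unit vectors predict emotion category?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(600, 20)).astype(float)  # 600 photos x 20 action units
y = rng.integers(0, 13, size=600)                     # 13 emotion categories

# With real data, low cross-validated accuracy would mean facial poses
# alone don't reliably identify the emotion -- the paper's core finding.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f} (chance is about 0.08)")
```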

Strike a pose

The team then asked additional groups of volunteers to assess the emotional meaning of each facial pose alone.

They found that the judgments of the poses alone didn’t reliably match the ratings of the facial expressions when they were viewed alongside the scenarios.

Barrett said this shows the importance of context in our assessments of facial expressions:

When it comes to expressing emotion, a face does not speak for itself.

The study illustrates the enormous variability in how we express our emotions. It also further justifies the concerns around emotion recognition AI, which is already used in recruitment, law enforcement, and education.

Greetings Humanoids! Did you know we have a newsletter all about AI? You can subscribe to it right here.

Free Guy’s philosophy: Are we all just part of a grand simulation?

Have you ever wondered if you’re just a character in some elaborate simulation? You shake the thought off because you’re a real person, living a real-life, in a concrete reality. But can you be certain that you are? Isn’t it at least possible that your body and that the world around you are nothing but illusions?

This is the conundrum that Guy, played by Ryan Reynolds, finds himself in the middle of in the film Free Guy. However, in this case, he is, in fact, an NPC (non-player character) in an open-world computer game called Free City. He is a character in a simulation, and this realization changes his “life” forever.

Many of us have wondered if we, like Guy, are just NPCs in some game. A skeptical hypothesis like this was first raised by the 17th-century French philosopher René Descartes. He didn’t imagine he might be an NPC, of course. He imagined that an evil demon might be deceiving him into thinking the world around him was real when it was not. But evil demons are considered a bit passé these days.

In 20th-century philosophy, the favored alternative was to imagine that we might be brains in vats hooked up to electrodes being deceived by nefarious neuroscientists feeding our experiences into us via electrical impulses. This is (more or less) the basic premise of the film The Matrix. But now even brains in vats are old hat.

Contemporary philosophy instead asks us to imagine that we are living in a computer simulation and that our minds themselves are mere emulations that run on computer code. This hypothesis has been taken seriously by many philosophers and scientists, with some arguing that the hypothesis is not only possible but has a good chance of being true. So, the question of whether you are in a situation similar to Guy’s is a genuine one worth thinking over.

Do we have free will?

So if we are like Guy and living in a computer simulation, what becomes of our free will?

In the movie, Guy certainly feels like he has free will, but admits that his thoughts and behavior are down to his programming. And there certainly seems to be something right about this. If our minds were nothing but a computer program running on a server somewhere, then it's hard to see how we could have any real control over what we think and do. Everything would be determined by our programming.

But now we can take this a step further and ask: what is the difference between a mind that runs according to a program in a computer, and one that runs according to biological laws in a brain?

Guy has no free will because his thoughts and actions are the result of electronic operations going on inside a computer that he has no control over. But, our thoughts and actions are the result of biological operations going on inside our brains, and we have no control over those either. So, it seems, whether we are in a computer simulation or the real world matters not. Either way, we lack free will.

There might be some hope for both Guy and us, however. Perhaps Guy’s programming and our neurology merely set certain parameters within which free action is still somehow possible, as some (known as libertarians) think. Or perhaps free will consists in something other than being able to do otherwise than we do, as others (known as compatibilists) think.

Can computer programs be conscious?

A traditional view that was held by Descartes, and is still held by some contemporary philosophers (such as Richard Swinburne), is that consciousness does not spring forth from the operations of our biological brains at all. On this view, the mind is entirely distinct from the brain, but the two nevertheless interact. So conscious thoughts occur in a spiritual mind and are then beamed across to the physical brain.

But if, like me, you find such a view implausible and think that consciousness arises from the operations of biological material, then it seems one must admit that a genuine conscious mind could arise from the operations of non-biological materials too, like those a computer is made from. And if this is right, given the rapid increase in computing power that we are now seeing, and the development of artificial intelligence, the day when such a mind arises might not be far off.

Credit: 20th Century Studios/Allstar
Guy might be code but he feels happiness and sadness and even forms genuine connections.

The consequences of there being conscious computer minds are far-reaching. One such consequence is the question of the moral status of such minds, which is raised in Free Guy. If they can have desires and emotions, be happy or sad, and fall in love, all of which Guy does in the movie, then they certainly seem to warrant as much moral respect as human beings.

But then it would seem to be morally wrong to interfere with their lives, such as resetting their programming, which would be akin to murder. As such, it would seem that the legal frameworks that protect our rights would have to be extended to protect theirs too. How to do this is a complicated issue that philosophers and legal experts are only just beginning to tackle.

The Conversation

Article by Benjamin Curtis, Senior Lecturer in Philosophy and Ethics, Nottingham Trent University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How to build a customer-obsessed company

For startup founders, passion for the product comes naturally. You’re probably your first customer, and so it’s easy to build something that you think customers will want. But when it comes to growing your business, designing great products requires more than your own intuition. It requires establishing a customer-obsessed mindset.

In my first startup, we were building a platform to buy and sell solar renewable energy credits. It was a nascent market driven by new laws requiring the adoption of solar energy. Our first thought was to build an online auction for buyers and sellers, like eBay, but for solar energy. Simple. If you build it, they will come, right?

Cultivating a relationship with your customers can be incredibly difficult. It takes diligence, patience, and processes that can be hard for founders—and at the heart of it, it’s all about being vulnerable and committed to listening to customers. We told everyone we were a startup trying to figure this business out, and we did it in a way that was open, honest, and transparent.

A few months after we launched, we had a small number of buyers and sellers, but the hockey stick growth wasn’t happening. I had started my career in customer support, so one of the first things I did was set up a toll-free number and support email. We published that everywhere and took every phone call and every email. 

We listened. And we learned. We learned how difficult it was for home and small business owners to register their solar energy systems. This led us to develop a subscription service alongside our trading platform that made registration easier for our users. Eventually, one in seven installations in the market we served was registered on our platform.

Once we created that service, we learned more about how difficult it was for solar installers to explain everything to their customers. So we developed a channel offering our service alongside their installation. We then developed a network of over 300 partners who we initially thought we were competing with for business.

Sometimes, it took us a long time to listen. We were brazen in our thinking that all transactions would happen online when we courted large energy companies to be buyers in our marketplace. We were an online technology company incubated at Stanford… so when buyers suggested that we transact with them over AOL Instant Messenger, we didn’t take it seriously. But once we listened, we completed our business model and reached the scale to be profitable.

In these three different relationships, what we thought going in was very different from how our business evolved. When you’re building a startup, there is not a lot of margin for error. Each piece that came together made the difference between success and failure. 

So why is being vulnerable with your customers vital to your startup’s future? Simply because those conversations—as awkward and painful as some of them will be—generate invaluable feedback that will help you build a product that truly serves your users’ needs. Here are a few ways to do this:

Talk to every customer

As the face of the company, the founder needs to be available to those early customers. Give out your email or WhatsApp so customers know they can talk to you. Model this behavior in front of your team, however small or large.

Pay attention to negative feedback

Check your social media and online reviews, and investigate every complaint. Those gripes are gifts, and they’re how your customers will teach you. Dig into the details to find meaningful insights, and don’t ignore them because they tell a story you don’t want to hear or they’re hard to address. Pay attention for precisely those reasons.

Amplify the voice of your users

Don’t stop with just talking to customers—make their voices heard. For example, you could use software like Airtable or Trello to create internal dashboards that display customer feedback (as well as other metrics). Also, consider a Slack integration where customer issues are automatically posted to a shared Slack channel—it’s a great way to get product managers, engineers, and other employees thinking about the customer experience.
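
To make the Slack idea concrete, here is a minimal sketch of that kind of integration, assuming you’ve set up a Slack incoming webhook for your feedback channel. The webhook URL, function name, and feedback fields are all illustrative, not part of any product mentioned above.

import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"  # placeholder

def post_feedback_to_slack(customer, rating, comment):
    # Format one piece of customer feedback and post it to the shared channel.
    message = f"*{customer}* rated us {rating}/5:\n> {comment}"
    response = requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)
    response.raise_for_status()  # fail loudly so dropped feedback gets noticed

post_feedback_to_slack("Acme Corp", 2, "Setup took hours and the docs were unclear.")

Posting straight into a channel everyone reads keeps the feedback loop visible without anyone having to open a separate dashboard.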

Base product iterations on customer feedback

When developing your product roadmap (and later, when you’re hashing out the product requirements), be sure to incorporate user feedback. Analyze how they’re using your product, and embrace agile development so you can shift direction quickly without losing large amounts of development time.

4 ways to act on customer feedback as a founder 

We recently started a Zendesk for Startups Program where we share strategic advice with an exclusive community of up-and-coming startups. Here are four essential tips we always share with founders for getting customer experience right:

1. Follow up with users who give you a negative rating. This is a great way to build trust with customers and retain their business.

2. Analyze bad ratings that include comments. While it might be tempting to spend time on the glowing reviews, you’ll garner more actionable information from those customers who are unhappy with your product or service.

3. Meet every week to discuss customer satisfaction outcomes. Set time aside for the team to analyze negative comments and brainstorm about ways to remedy the underlying causes.

4. Group negative comments by cause—and look for trends. Doing this will help you identify problem areas such as long ticket resolution times, poor product documentation, or bugs/unexpected product behavior. A minimal sketch of this grouping follows below.
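
As promised, here is a toy example of that grouping step in Python; the tickets and cause tags are invented, and in practice the records would come from your support tool’s export.

from collections import Counter

# Each record tags one negative comment with its underlying cause.
negative_feedback = [
    {"ticket": 101, "cause": "long resolution time"},
    {"ticket": 102, "cause": "poor documentation"},
    {"ticket": 103, "cause": "long resolution time"},
    {"ticket": 104, "cause": "product bug"},
    {"ticket": 105, "cause": "long resolution time"},
]

trend = Counter(item["cause"] for item in negative_feedback)
for cause, count in trend.most_common():
    print(cause, count)
# long resolution time 3
# poor documentation 1
# product bug 1

Even a tally this simple makes the dominant problem area obvious at a glance.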

The next time you’re debating how to balance the time you’re spending on your product vs your customers, remember this quote from Zendesk’s CEO & Founder, Mikkel Svane:

Quote by Zendesk CEO

Don’t forget to talk to your customers!

Video piracy is booming — thanks to the explosion of streaming services

With the launch of Paramount+, Australian consumers of video streaming are arguably drowning in choice.

We now have more than a dozen “subscription video on demand” services to choose from, with dozens more options available worldwide to anyone with a VPN to get around geoblocks.

But all this competition isn’t actually making things easier. It’s likely all this “choice” will see more of us turning to piracy to watch our favorite films and television shows.

The problem is that services are competing (at least in part) through offering exclusive content and original programming.

Paramount+, for example, is offering content from Paramount Pictures and other brands owned by entertainment conglomerate ViacomCBS. These include Showtime, Nickelodeon, and Comedy Central. Its catalog ranges from the Indiana Jones and Harry Potter movies to popular TV shows Dexter, NCIS, and The Big Bang Theory.

This content may have been available on your preferred services. But the end goal — as with Disney+ and others — is for all ViacomCBS-owned content to be exclusive to Paramount+.

Here the problem for the consumer becomes evident. How many subscription services do you want to join? Subscribing to the six most popular video streaming services — Netflix, Stan, Disney+, Amazon Prime Video, Binge, and Apple TV+ — will cost you about $60 a month. How much more are you willing to pay for a new service to watch your favorite film or TV show only available on that service?

The temptation to turn to piracy is clear.

Losing aggregation

The emergence of video streaming services such as Netflix was heralded as an effective way to curb illegal downloads. Netflix achieved this at first by aggregating content. It provided a convenient, cost-effective, and legal way to access a large catalog of TV shows and movies, and consumers embraced it.

But as the streaming market has developed, the loss of content aggregation appears to be leading back to piracy.

As an example, according to analytics company Sandvine, the file-sharing protocol BitTorrent accounted for 31% of all uploads in 2018; in 2019 it was 45%. As Sandvine explained:

When Netflix aggregated video, we saw a decline in file sharing worldwide, especially in the US, where Netflix’s library was large and comprehensive. As new original content has become more exclusive to other streaming services, consumers are turning to file sharing to get access to those exclusives since they can’t or won’t pay money just for a few shows.

This trend has been amplified by COVID-19 lockdowns, with traffic to illegal TV and movie sites reportedly surging in 2020. A survey for the Australian Government found 34% of respondents consumed some form of illegal content in 2020.

Lessons from music

Why should this be happening more for TV shows and movies and not for music?

There’s an important difference. Services such as Spotify, Apple Music, and Tidal offer you just about all the music there is. You don’t need to sign up to one service to listen to The Beatles and another to hear Taylor Swift. You need only sign up for one.

Credit: Zarak Khan/Unsplash
Music streaming services have the benefit of being a one-stop shop.

Research has shown that a consumer’s willingness to pay is often anchored around the initial information they are exposed to. Viewers accustomed to paying for one streaming service might be reluctant to pay for as many as six.

In a survey of about 3,000 US TV watchers in February, 56% said they felt overwhelmed by the number of streaming services on offer.

Deloitte’s Australian Media Consumer Survey 2019 found that almost half of streaming video-on-demand subscribers said it was hard to know what content is available on what service. Three-quarters said they wanted the content in one place, rather than having to hunt through multiple services.

Seeking a one-stop-shop

Although it is not yet clear how many video streaming services the Australian market can support, high-profile failures both at home and overseas should serve as a warning.

But in the absence of a legal one-stop-shop for TV and movies, people will take matters into their own hands.

Illegal streaming platforms that aggregate content from multiple video streaming services into a single interface are becoming more widespread. Such services typically use an open-source media player, coupled with cheap jailbroken hardware and a VPN to access a plethora of illegal entertainment.

Until the industry offers a legal alternative to such platforms, the popularity of such services is only likely to grow.

The Conversation

Article by Paul Crosby, Lecturer, Department of Economics, Macquarie University and Jordi McKenzie, Associate Professor, Department of Economics, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How to stop archived WhatsApp chats from popping back to your inbox

Welcome to TNW Basics, a collection of tips, guides, and advice on how to easily get the most out of your gadgets, apps, and other stuff.

If you’re using WhatsApp as your primary chat app, it lets you archive old conversations so they don’t appear in your inbox. However, if a new message arrives in any of those conversations, the chat pops back into your inbox.

Thankfully, WhatsApp has changed that feature to let you keep archived chats from popping back to your inbox — even if there’s a new message. Here’s how you can enable it:

  • Open WhatsApp on your iPhone or Android device.
  • Head to Settings > Chats.
  • Turn on the “Keep Chats Archived” toggle.
WhatsApp’s new feature keeps archived chats in the archived box even if there’s a new message.

Once you enable this, all your archived chats will stay in the archived box. This enables you to remove annoying groups that you can’t exit — like your extended family group — from your inbox.

While you’re at it, you might want to check out our guide on how to send ‘view once’ images on WhatsApp.

TikTok partners with blockchain startup — and this could be good news for creators

On August 17, TikTok announced it will partner with Audius, a streaming music platform, to manage its expansive internal audio library.

Audius was not the obvious choice for partnering with the short video giant. A digital music streaming startup founded in 2018, it isn’t one of the major streaming services such as Apple Music or Spotify.

And, even more unusual, Audius is one of the first and only streaming platforms run on blockchain.

Remind me, what is blockchain?

Blockchain is a technology that stores data records and transfers values with no centralized ownership.

Transaction data on these systems are stored as individual “blocks”, each linked to the one before it by timestamps and unique identifiers, forming “chains”.

For music, this means individual songs are assigned unique codes, and clear records are stored each time a song is played. It can also mean more streamlined and transparent payments.
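
To make the structure concrete, here is a toy sketch in Python of the chaining idea: each “block” records one play and is linked to the previous block by its hash, so the history of plays can’t be silently rewritten. Real systems, Audius included, are far more involved; this only illustrates the data structure.

import hashlib
import json
import time

def make_block(song_id, previous_hash):
    # A block records one play, when it happened, and a link to the
    # previous block; its own hash is then derived from all of that.
    block = {
        "song_id": song_id,
        "timestamp": time.time(),
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block("song-0001", previous_hash="0" * 64)
second = make_block("song-0042", previous_hash=genesis["hash"])
print(second["previous_hash"] == genesis["hash"])  # True: the chain links up

Tampering with any earlier block would change its hash and break every link after it, which is what makes the play records trustworthy.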

Platforms like Spotify and Apple Music use a “pro-rata” model to pay artists. Under this system, artists get a cut of the platform’s overall monthly revenues generated from ads and subscription fees, as calculated by how many times their music was played.

The pro-rata model has been criticized by independent artists and analysts for maintaining a “superstar economy” in which the most popular artists claim a majority share of monthly revenue.

Facilitated by its blockchain system, Audius uses a “user-centric” model, where artists receive revenues generated by the individual users who stream their music directly.

That is, payments are generated for artists more directly from people streaming their songs.
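
The difference between the two models is easiest to see with numbers. Here is an illustrative comparison in Python; the subscribers, artists, fees, and play counts are all made up.

plays = {
    "casual_fan": {"indie_artist": 1},  # pays $10/month, streams one song once
    "heavy_user": {"superstar": 99},    # pays $10/month, streams all day
}
FEE = 10.0

# Pro-rata: pool everyone's fees, then split by share of total plays.
pool = FEE * len(plays)
total_plays = sum(sum(p.values()) for p in plays.values())
pro_rata = {}
for user_plays in plays.values():
    for artist, n in user_plays.items():
        pro_rata[artist] = pro_rata.get(artist, 0.0) + pool * n / total_plays

# User-centric: each listener's own fee is split only among the
# artists that listener actually played.
user_centric = {}
for user_plays in plays.values():
    user_total = sum(user_plays.values())
    for artist, n in user_plays.items():
        user_centric[artist] = user_centric.get(artist, 0.0) + FEE * n / user_total

print(pro_rata)      # {'indie_artist': 0.2, 'superstar': 19.8}
print(user_centric)  # {'indie_artist': 10.0, 'superstar': 10.0}

Under pro-rata, nearly all of the casual fan’s fee flows to a superstar they never played; under the user-centric model, each subscriber’s money follows their own listening.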

While the biggest streaming players have refused to abandon pro-rata payments, Deezer — a French music streaming service with around 16 million monthly active users — has taken the first steps towards user-centric payments.

Now, it seems TikTok may be poised to follow.

And how does TikTok work?

With over 800 million monthly active users, TikTok is the world’s largest short video platform and has become a significant force in the global music industry.

Once on TikTok, songs can be used as background for short videos — and can go viral.

Currently, putting independent music on TikTok requires the help of a publisher or companies like CD Baby or TuneCore that charge a fee or take a cut of revenues.

Audius will enable independent artists to upload music directly to TikTok. This would be a boon for musical artists given the centrality of music on TikTok and the platform’s propensity for failing to properly credit artists for their work.

Recent research into blockchain systems in book publishing suggests the technology can lead to improved tracking of intellectual property and increased royalty payments to independent authors. The same may be true for independent musicians on TikTok, but a history of overstated claims and unfulfilled promises warrants measured expectations.

Is this a fairer payment system?

So far, TikTok has given no indication it will use Audius’ blockchain technology to implement a user-centric revenue model, but incorporating royalty payments per video play is a reasonable expectation.

When artists are paid from a platform like Spotify, they are paid in money. But Audius conducts blockchain transactions using its in-house cryptocurrency called $AUDIO.

Cryptocurrencies are virtual currencies stored on public ledgers rather than in banks and used to make transactions facilitated by blockchain systems.

Audius’ co-founder claims most users are unaware of, or uninterested in, the cryptocurrency underpinning the platform — but the price of $AUDIO spiked on coin markets immediately following the announcement.

Because cryptocurrencies operate on a volatile market, if artists were to collect payments in $AUDIO it might be impossible to predict whether their income would amount to fair compensation.

Artists’ income won’t only be tied to how often their music is listened to, but also to market speculation.

So, what does this mean for artists?

Some independent artists may be wary of handling payments through a decentralized digital currency subject to fewer regulations and unpredictable value fluctuations — not to mention the environmental costs associated with mining and maintaining cryptocurrency.

And a user-centric model is not without flaws. Truly testing the model requires full cooperation from record labels, music publishers, and digital platforms.

Anything less would create fundamentally unequal conditions for artists using different services.

Even TikTok isn’t putting all its eggs in the blockchain basket. In June 2020 TikTok established partnerships with major labels and indie consortia for music distribution, and in July 2021 TikTok announced a new partnership with Spotify to offer premium services exclusive to European artists.

But, after years of sensational claims and unfulfilled promises that blockchain will transform the future of the music industry, TikTok has taken a tangible step towards uncovering what that future might actually look like for everyday artists.

The Conversation

This article by D. Bondy Valdovinos Kaye, Assistant researcher, Queensland University of Technology is republished from The Conversation under a Creative Commons license. Read the original article.

Ignore what you’ve heard: techies make great CEOs

Data nerds, computer geeks, science morons, I’m speaking to you. It’s the ever-prevailing cliché: the antisocial introverts who spend their days hacking away at some nerdy project that nobody understands. The freaks that push the frontiers of tech every day but still can’t keep up with the Kardashians.

The cliché goes further. If techies lack basic human skills like communicating effectively or cracking a funny joke, then they won’t make good managers. And don’t even think of appointing such people as a CEO.

Of course, this is a stereotype. Most techies I know — including myself — are interesting, multi-faceted people with exciting hobbies and beautiful personalities. Most techies I know score as high in human skills as they do in their area of technical expertise. Most techies I know would be fantastic managers and CEOs.

But that’s not the point. The underlying problem is that these clichés exist and that enough people still believe in them. That’s what is called the boffin fallacy: the belief that techies can only be good at tech, and cannot excel in other domains.

As a consequence, techies get encouraged to pursue an academic career or to stay in their companies’ R&D departments. The prevailing idea among many investors, advisors, and even colleagues is that techies are not as qualified as MBAs for launching their own company or embarking on a corporate career.

This culture of keeping techies out of business is especially strong in Europe and the Middle East. But it has some foothold in the US and parts of Asia, too.

This culture, however useful it may have been in the past, is causing serious harm. Not only does it push too many career paths into pre-modeled shapes. It also hurts the overall economy and its capacity to innovate. It’s about time we debunk the myth that techies can’t be great CEOs.

The economic growth of the last decades has been fueled by tech

It’s a fact: without technological innovation, we would be nowhere close to the living standard that we have attained. For the last two decades, much of the growth of the biggest stock indexes has been due to tech companies. At the moment, this is more apparent than ever: without the tech giants, the stock market would have done much worse during the ongoing pandemic.

When you think about who is leading these stellar-performing companies, they all have technical backgrounds. Larry Page, Sergey Brin, Jeff Bezos, and Mark Zuckerberg are all techies turned entrepreneurs. That’s no coincidence.

That’s not to say that less technical people can’t lead their companies to outstanding growth. Tim Cook, for example, has been business-oriented since the days after he’d earned his bachelor’s degree in industrial engineering. Other career CEOs do similarly remarkable jobs.

Nevertheless, the sheer ubiquity of techies in the C-suites of top-performing companies demonstrates that they are far from incapable in the business world. Funnily enough, this phenomenon often doesn’t serve as an example to other companies and investors.

Rather, they see these CEOs with a tech background and extreme success as rarities and conclude that the average techie isn’t capable of anything like that.

The boffin fallacy

The boffin fallacy, in short, is the false belief that techies — typically people who have a degree in STEM, and who work in a tech-related area — are boffins. A boffin, data nerd, computer geek, or science moron, is believed to be incapable of doing marketing, finance, human resources, and any other business-related activity.

Most people acknowledge the existence of techies that excel at business, like the CEOs mentioned above. But they think that those are outliers and that the statistical norm is that all techies are boffins.

This couldn’t be further from the truth. Not only do techies lead the top-performing companies of today. History is equally full of examples of tech entrepreneurs.

Benjamin Franklin, for example, invented lightning rods, bifocals, the iron furnace stove, a carriage odometer, and the harmonica. He also was the owner of a print shop, a newspaper, and a general store by the time he was 24, and eventually became one of the wealthiest men of his time.

Thomas Edison, inventor of the lightbulb, is another stellar example. He not only powered scientific breakthroughs but also brought investors like J. P. Morgan on board and distributed his devices to the masses.

Other examples include George Eastman, Marie Curie, and many more. These are no statistical outliers — they’re a clear demonstration that techies are capable of being entrepreneurs.

Why techies make good CEOs

In a world where decisions are based more and more on data, and where analytic capabilities and quantitative rigor gain more and more momentum, leaders with a technological background have a clear edge.

People without a tech background often put their soft skills forward as an advantage. Yet they fail to realize that soft skills are relatively easy to pick up and learn on the job, while quantitative skills take years of study.

I’m not saying that people in the humanities are less skillful or sophisticated than those in the sciences. I have enormous respect for people who are in the humanities — in fact, I almost pursued a degree in classical philosophy myself.

However, an engineer working on microprocessors won’t find it that hard to understand how to manage people, set up the financial system of a company, or do the legal stuff. Conversely, an HR manager or an accountant won’t be able to contribute a thing to the architecture of a microprocessor unless they’ve taken a few classes.

That’s not to say that techies know everything about humanities. They, too, get lost when two philosophers discuss the ins and outs of a passage in Aristotle’s Proverbs. It just so happens that microprocessors contribute more to today’s economic growth than the works of Aristotle.

In a data-driven economy, techies clearly have an advantage when it comes to their skillset. Not only can they make use of it when it comes to the product of a tech company. They can also learn the business side of things at least as quickly as a non-techie — if not faster, since business operations are becoming more and more quantitative, too.

Credit: Kumpan Electric / Unsplash
Techies are extremely skilled people. However, there are cases when they’d better stay at the lab.

Tech founders are the better CEOs, but they’re quieter, too

VC firm Andreessen Horowitz, which backs companies such as Stripe, Lime, and Airbnb, puts it well: tech founders are often better than hired CEOs because they know the company from the inside out, they’re more courageous when it comes to changing the business strategy, and they’re committed to the long term.

The first point is pretty obvious: the founder was there from day zero, hired the first employees, and knows the product in all its details. They also know the customer base pretty well, along with all the strengths and weaknesses of the company. This puts them in a unique position of expertise.

The second point comes as a consequence of the first: since founders made the base assumptions upon which the company operates, they’re more courageous when it comes to changing them.

For example, when Steve Jobs rejoined Apple, he pivoted it from a computing company to a personal product company. In hindsight, this was a genius move, but at the time most people thought he was mad. He had the courage because he’d built Apple in the first place, and because he knew what market needs the company was capable of serving.

The third point, long-term commitment, may not apply as much as it used to. These days, many companies get founded with a possible exit in mind. There are, however, many other founders who view their company as their life’s work. Not only are these types of founders less willing to sell their companies; they’re also more willing to make short-term sacrifices for long-term gains.

If these tech founders happen to confirm the cliché of being more introverted than their corporate counterparts, so be it. Research shows that introverts are better leaders anyway. Business success, so it seems, doesn’t depend on how loudly you talk, but on how good your decisions are.

How to tell if a techie isn’t fit for the C-suite

From this perspective, it seems almost absurd to appoint CEOs to tech companies if they don’t have a technical background themselves. Indeed, the research backs up this standpoint. There are only a few situations where it might be better to replace a technical founder with a career CEO.

  • They don’t listen to advice

Some founders are so in love with their own ideas that they don’t want other people to mess around with them. As a result, they might block out all advice.

Note that I didn’t say that they don’t follow advice. Some of the greatest business decisions have been taken against all advice. From Mark Zuckerberg, who decided not to sell Facebook when he had the opportunity, to Reed Hastings, who turned the DVD service Netflix into a streaming service, tech history is full of decisions where founders preferred to trust their gut more than their board.

Founders should take advice into account, however. Even if they finally decide against it, they should at least have thought through all the options. Founders who don’t listen to anybody have slim chances of succeeding.

  • They don’t understand business goals

A techie-turned-entrepreneur can’t live only for their product; they need to understand the business side, too.

It’s not like in academia, where researchers throw a party if they’ve secured a research grant for the next two years. At this point, they don’t need to think about money until the next time they apply for a grant.

In industry, once you’ve secured funding, you get to work. You can’t take investments forever — instead, you want to be profitable at some point. Some founders are so in love with product development that they fail to recognize this extra dimension. If this is a chronic problem, it might be better to hire somebody to handle the business side of things.

  • They understand business goals but fail to implement them

Some founders — and, frankly, some managers and CEOs from all walks of life, too — seem unable to build a culture where goals and milestones get reached. A culture where every employee feels welcome and accepted, and where teams work productively and effectively.

Even worse, they might create a culture dominated by fear. This can lead to situations where employees don’t talk about problems anymore because they’re scared of being reprimanded. Eventually, this kind of leadership creates places where nobody wants to work.

The good news is that this last problem can be resolved in many cases through additional training. But again, if the founder seems unable to learn for a prolonged period of time, it might be better to leave them to the product and hire someone else for business operations.

The bottom line: it’s still hard to make it, but confidence in techies is growing

Although there are many points indicating that tech founders are the better leaders, the boffin fallacy persists. That’s bad news for techies because it means that however qualified you are, investors, advisors, and even colleagues just might not believe in you.

As a founder, however, it’s important to have people who believe in you. Not only does this boost your own confidence — and believe me, without confidence it’ll be hard to make it anywhere. If customers don’t believe in you and your product, why should they buy your product? And if investors don’t believe in you, why should they ever hand you a check?

The status quo is that the boffin fallacy is everywhere you look. And as long as this persists, we’re pushing perfectly qualified people into R&D departments and research labs instead of letting them change the world.

There is a silver lining though: in Europe, as in parts of the Middle East and Asia, US investors are becoming more and more present. This may be an indicator that confidence in tech founders is growing in Europe, the sad heart of the boffin fallacy, and other parts of the world.

It’s still hard to make it as a techie. To some degree, it will always be hard to make it anywhere. But slowly, people seem to be letting go of the boffin fallacy.

Times are changing, folks.

This article was written by Rhea Moutafis and was originally published on Start it Up. You can read it here.

AI can now identify footprints — but forensic experts won’t get fired just yet

We rely on experts all the time. If you need financial advice, you ask an expert. If you are sick, you visit a doctor, and as a juror you may listen to an expert witness. In the future, however, artificial intelligence (AI) might replace many of these people.

In forensic science, the expert witness plays a vital role. Lawyers seek them out for their analysis and opinion on specialist evidence. But experts are human, with all their failings, and the role of expert witnesses has frequently been linked to miscarriages of justice.

We’ve been investigating the potential for AI to study evidence in forensic science. In two recent papers, we found AI was better at assessing footprints than general forensic scientists, but not better than specific footprint experts.

What’s in a footprint?

As you walk around your home barefoot you leave footprints, as indentations in your carpet or as residue from your feet. Bloody footprints are common at violent crime scenes. They allow investigators to reconstruct events and perhaps profile an unknown suspect.

Shoe prints are one of the most common types of evidence, especially at domestic burglaries. These traces are recovered from windowsills, doors, toilet seats and floors and may be visible to or hidden from the naked eye. In the UK, recovered marks are analysed by police forces and used to search a database of footwear patterns.

The size of barefoot prints can tell you about a suspect’s height, weight, and even gender. In a recent study, we asked an expert podiatrist to determine the gender of a bunch of footprints and they got it right just over 50% of the time. We then created a neural network, a form of AI, and asked it to do the same thing. It got it right around 90% of the time. What’s more, much to our surprise, it could also assign an age to the track-maker at least to the nearest decade.
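
The article doesn’t spell out the network’s architecture or inputs, so the following is a purely hypothetical sketch of the general approach in Python: train a small neural network on footprint measurements to predict the track-maker’s sex. Every feature, value, and label below is invented for illustration.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 500
# Invented features: foot length, forefoot width, heel width (in cm).
X = rng.normal(loc=[25.0, 9.5, 6.0], scale=[1.8, 0.8, 0.5], size=(n, 3))
# Toy labels loosely tied to foot length, just to give the network a signal.
y = (X[:, 0] + rng.normal(0.0, 1.2, n) > 25.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print(f"held-out accuracy: {net.score(X_test, y_test):.0%}")

The real system was presumably trained on far richer footprint data; the point of the sketch is only that sex prediction here is a standard supervised classification task.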

The footprints analyzed by the Bluestar AI, with a heat map over them suggesting areas of ambiguity. Credit: Matthew Bennett, Author provided

When it comes to shoe prints, footwear experts can identify the make and model of a shoe simply by experience – it’s second nature to these experts and mistakes are rare. Anecdotally, we’ve been told there are fewer than 30 footwear experts in the UK today. However, there are thousands of forensic and police personnel in the UK who are casual users of the footwear database. For these casual users, analysing footwear can be challenging and their work often needs to be verified by an expert. For that reason, we thought AI may be able to help.

We tasked a second neural network, developed as part of an ongoing partnership with UK-based Bluestar Software, with identifying the make and model of footwear impressions. This AI takes a black and white footwear impression and automatically recognises the shape of component treads. Are the component treads square, triangular or circular? Is there a logo or writing on the shoe impression? Each of these shapes corresponds to a code in a simple classification. It is these codes that are used to search the database. In fact the AI gives a series of suggested codes for the user to verify and identifies areas of ambiguity that need checking.
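
As a rough illustration of that workflow, here is a tiny hypothetical sketch in Python. The shape names, codes, and database records are all invented; the real system’s coding scheme isn’t detailed in this article.

detected_shapes = ["circle", "square", "logo"]  # e.g. the network's output

SHAPE_CODES = {"circle": "C1", "square": "S1", "triangle": "T1", "logo": "L1"}

footwear_db = [
    {"make": "BrandA", "model": "Runner", "codes": {"C1", "S1", "L1"}},
    {"make": "BrandB", "model": "Court", "codes": {"T1", "S1"}},
]

# Translate detected shapes into codes, then find shoes whose tread
# patterns contain every code in the query.
query = {SHAPE_CODES[s] for s in detected_shapes}
matches = [shoe for shoe in footwear_db if query <= shoe["codes"]]
for shoe in matches:
    print(shoe["make"], shoe["model"])  # BrandA Runner

The human stays in the loop: the suggested codes are verified by the user before the database search is trusted.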

In one of our experiments, an occasional user was given 100 randomly selected shoe prints to analyse. Across the trial, which we ran several times, the casual user got it right between 22% and 83% of the time. In comparison the AI was between 60% and 91% successful. Footwear experts, however, are right nearly 100% of the time.

One reason our second neural network wasn’t perfect and didn’t outperform real experts is that shoes vary with wear, making the task more complex. Buy a new pair of shoes and the tread is sharp and clear, but after a month or two it becomes less clear. But while the AI couldn’t replace the expert trained to spot these things, it did outperform occasional users, suggesting it could help free up time for the expert to focus on more difficult cases.

Will AI replace experts?

Systems like this increase the accuracy of footwear evidence and we will probably see them used more often than they are currently – especially in intelligence-led policing that aims to link crimes and reduce the cost of domestic burglaries. In the UK alone, burglaries cost on average £5,930 per incident in 2018, which amounts to a total economic cost of £4.1 billion.

AI will never replace the skilled and experienced judgement of a well-trained footwear examiner. But it might help by reducing the burden on those experts and allow them to focus on the difficult cases by helping the casual users to identify the make and model of a footprint more reliably on their own. At the same time, the experts who use this AI will replace the ones who don’t.

The Conversation

This article by Matthew Robert Bennett, Professor of Environmental and Geographical Sciences, Bournemouth University and Marcin Budka, Professor of Data Science, Bournemouth University, is republished from The Conversation under a Creative Commons license. Read the original article.

Someone ported Super Mario 64 to play in a browser (again)

Super Mario 64 is the first videogame I can remember really playing seriously, and at the time its 3D world and movement were a technological marvel.

Now you can play one of Nintendo‘s best from pretty much any modern Bowser browser — as if it were something as basic as Pong or Tetris. That’s technology for you.

Mario 64 in a browser

Someone managed to stick what appears to be a complete version of Mario 64 in a browser (via NintendoLife, PureXbox). And apparently, it’s been around since April and somehow Nintendo has not managed to take it down yet. It seems to work with a variety of modern browsers — both desktop and mobile.

In fact, the reason I learned about it in the first place is that PureXbox pointed out the game works in the Xbox’s browser:

You can use either a keyboard or a Bluetooth controller to play the game. There’s no way to customize the controls that I can see, but the game feels pretty responsive even using a keyboard. You can save your progress too, so it can serve as more than a casual one-time escapade.

Of course, if you want to experience the game properly, the best way is probably to just buy Super Mario 3D All-Stars for the Switch. Or, you know, find a Nintendo 64. It’s a game I’ve returned to every few years because it remains one of the most joyful expressions of 3D movement in gaming.

Truth be told, this isn’t the first time Mario 64 has shown up in a browser; there was even a Unity-powered HD remake for a while back in 2016, but that was a single level. The fact that this site has managed to stay up since at least April is rather impressive, but I imagine the renewed attention means Nintendo will see that it doesn’t stay up for much longer.

Did you know we have a newsletter all about consumer tech? It’s called Plugged In – and you can subscribe to it right here.

Analysis: Tesla’s humanoid robot might be Elon’s dumbest idea yet

Let’s just dive right in. Tesla held its much-ballyhooed “AI Day” yesterday which, as Shift’s Cate Lawrence pointed out earlier, was little more than a job fair.

There were, however, a couple of interesting reveals.

The good news was the company’s new AI chips. According to the supposed specifications and obligatory hyperbole, Tesla’s AI chips will be the world’s most powerful.

It’s at least plausible that the AI team at Tesla could pull off such a feat. The race to develop a better AI chip – currently dominated by Intel, AMD, NVIDIA, and the like – is a perpetual game of leapfrog.

But let’s talk about the Tesla robot.

Credit: Tesla
Basic specifications of a Tesla bot

TNW’s Ivan Mehta covered the announcement and all the machine’s specifications earlier today.

So, for this analysis, I’m going to give you my conclusion first, then we’ll work our way back from there.

Closing thoughts: Everybody wants this to be real. Me, you, the entire writing staff of the Simpsons (wait for it, I’m sure it’s coming), even Tesla’s competitors.

We all want Rosie the Robot to be real.

But here’s the truth laid bare: this is a hustle. The Tesla Robot is Elon Musk at his PT Barnum-esque best. He’s promising everything you want and daring you to dream alongside him while he picks your pocket.

Okay, now that we’ve gotten that out of the way, allow me to explain why.

The kind of AI it would take to power a fully autonomous robot doesn’t exist. Rosie the aforementioned Robot, from the Jetsons, is not real. In fact, and in spite of my enthusiasm for the idea, I doubt we’ll see a fully-autonomous robot in the next 30 years.

Tesla claims that it’s going to build a robot with the same AI brain as its cars. The robot would operate on a vision system using multiple cameras. Nothing about that idea even comes close to the technology already in the space.

But let’s continue anyway.

Engineering-wise, it can carry about two or three sacks of potatoes, it moves about as fast as you can walk backwards, and if it were a UFC fighter it’d weigh in as a flyweight.

The obvious use-case for such a machine is as a factory worker, but even a cursory glance at the design rules this out almost instantly.

For crying out loud, it has humanesque hands and could be knocked over by a saloon door.

What’s it going to do, offer lemonade to the human workers and iced oil to the robots that were actually designed for factory work?

It’s obvious the Tesla Robot is meant to appeal to consumers. The design isn’t meant to be functional, it’s meant to imbue us with warm, fuzzy hope.

It’s just a lil’ bitty thing y’all, c’mon it can’t hurt anybody!

We’re supposed to look at it and imagine all the ways it could make our lives easier. According to Tesla’s website, it’ll only be capable of performing simple, boring, or unsafe tasks at first.

But the claims Musk makes about its future abilities are straight out of a Will Smith summer blockbuster.

Speaking at yesterday’s event, Musk said:

Can you talk to it and say, ‘please pick up that bolt and attach it to a car with that wrench,’ and it should be able to do that. ‘Please go to the store and get me the following groceries.’ That kind of thing. I think we can do that.

Okay, sure. Sure, okay.

That’s not how AI works. Just ask Boston Dynamics. While Tesla was busy contemplating the best mannequin design to use for its big ‘robot’ reveal, the makers of the incredibly popular Atlas and Spot robots have been making humanoid robots dance, perform parkour, and stun the world with backflips.

Here’s the rub: Boston Dynamics was founded in 1992 as an MIT spin-off. That puts Tesla about three decades behind the curve here.

But, just for the sake of argument, let’s pretend that Tesla has a robot as advanced as Atlas. There is a zero-percent chance that a Tesla robot running any AI system on the planet will be able to perform a function as simple as walking into my house and fixing me a cup of instant coffee.

I’m not even suggesting it do anything radical like carry my laundry up the stairs and put it away or – laughing out loud here – going to the grocery store and picking up a list of items.

There’s a huge difference between building a dedicated burger-flipping machine that’s bolted to the floor or a warehouse bot with wheels that follows a digital pathway and creating something that can navigate any building.

The sheer number of systems and hardware it would take for a completely autonomous machine to enter a novel space (like my house) and then locate the ingredients (instant coffee, water, my mug, a spoon) to make me a cup of coffee would be astronomical.

I’ve got a kid running around, toys everywhere, half the time we don’t put the coffee can back where it goes, my mug could literally be anywhere, and good luck finding a clean spoon!

(Related: Tesla misleads customers about self-driving features, senators allege in request for FTC probe, on CNBC)

Now, a person could easily handle this task. I can walk into your mansion, apartment, or house right now and figure out how to make a cup of coffee as long as you’ve got the right ingredients. But there isn’t a robot in the world that can do that safely without prior training on the environment.

And, sure, I suppose if you wanted to prove me wrong you could dedicate some AI development to a coffee-making robot that specialized in rummaging through cabinets.

But then, if I asked you to make it fix me a grilled-cheese sandwich you’d have to go completely back to the drawing board and design an entirely new AI system.

So what happens when the Tesla robot tries to do something it hasn’t been explicitly trained on? The exact same thing that happens when you ask Alexa or Siri to do something it can’t.

A humanoid machine designed to do menial tasks sounds like a really stupid idea. If it’s supposed to carry things, build it to look like a wheel-barrow. If it’s supposed to do dangerous jobs, design it for the actual job. This thing is only fit for work as a crash test dummy in its current iteration.

At the end of the day though, Tesla stocks are rising and that’s surely all that matters to Musk and his sycophantic supporters.

Musk makes promises. They don’t come true. He makes more promises.

It’s always a safe gamble to assume Musk’s just blowing smoke up our butts. Just glance at the above tweet.

But that was yesterday’s BS. It’s a new day, Musk’s got a new hustle, and it’s time for a new bet.

So here’s my new challenge to Musk: If Tesla can send a robot to my house (sight unseen) next year that can perform the same chores I currently ask my 4-year-old to do every evening before bedtime, I’ll roll it up in a giant tortilla and eat it with ghost pepper hot sauce.

Italian dude gets a QR code tattoo to prove he’s covid-free

Smartphones always seem to run out of juice at the worst time. Luckily though, there’s an easy fix: use an age-old body modification scarring ritual to sear a permanent information conduit into your soft skin.

Or at least that’s what one young Italian dude seems to believe. Earlier this week, tattoo artist Gabriele Pellerone tattooed a QR code of the client’s official EU covid certificate on their upper arm, Il Reggino reports.

Unfortunately, the original story doesn’t share the name of the brave pioneer who got the tattoo (which is why I’ve affectionately named him ‘Italian dude’) but you can see his genius idea actually works in Pellerone’s video:

I also tried it on my phone and can confirm it does work. However, the tattoo isn’t the actual QR code for the EU-wide covid certificate — AKA ‘green pass’ — which shows your vaccination status or negative test results.

Instead, it appears to be merely a QR code linking to the official one, which is way bigger and harder to render as a tattoo. But hey, that’s just nitpicking.

Credit: Gabriele Pellerone Tattoo
“Hey baby, wanna scan me?”

As far as tattoos go though… it’s not that bad?

Sure, Mr. Italian dude’s fresh new ink could become outdated — my official covid app might’ve actually generated a new QR code for me just the other day — but so what. A lot of tattoos get outdated anyway, so why not be a maverick and start something new?

Just imagine. Rather than going for the gazillionth feather turning into birds or another random ‘profound’ quote, people could start slapping permanent QR codes on their bodies, linking to their favorite YouTube videos or websites.

Who knows, maybe QR codes can even replace Chinese characters as the quintessential tattoo that you don’t understand — linking to the Wikipedia page on discus throw instead of your beloved Neopets.

Long story short, ‘Italian dude’ is creating a brave new world of interactive tattoos, and I want to live in it. Which is why I’m getting this inked across my back:

My rad new tattoo

h/t Dave Keating

Twitter’s plan to let users flag ‘misinformation’ only amplifies existing bias

Over the past year, we’ve seen how dramatically misinformation can impact the lives of people, communities and entire countries.

In a bid to better understand how misinformation spreads online, Twitter has started an experimental trial in Australia, the United States and South Korea, allowing users to flag content they deem misleading.

Users in these countries can now flag tweets as misinformation through the same process by which other harmful content is reported. When reporting a post there is an option to choose “it’s misleading” — which can then be further categorised as related to “politics”, “health” or “something else”.

According to Twitter, the platform won’t necessarily follow up on all flagged tweets, but will use the information to learn about misinformation trends.

Past research has suggested such “crowdsourced” approaches to reducing misinformation may be promising in highlighting untrustworthy sources online. That said, the usefulness of Twitter’s experiment will depend on the accuracy of users’ reports.

Twitter’s general policy describes a somewhat nuanced approach to moderating dubious posts, distinguishing between “unverified information”, “disputed claims” and “misleading claims”. A post’s “propensity for harm” determines whether it is flagged with a label or a warning, or is removed entirely.

In a 2020 blog post, Twitter said it categorized false or misleading content into three broad categories.

But the platform has not explicitly defined “misinformation” for users who will engage in the trial. So how will they know whether something is “misinformation”? And what will stop users from flagging content they simply disagree with?

Familiar information feels right

As individuals, what we consider to be “true” and “reliable” can be driven by subtle cognitive biases. The more you hear certain information repeated, the more familiar it will feel. In turn, this feeling of familiarity tends to be taken as a sign of truth.

Even “deep thinkers” aren’t immune to this cognitive bias. As such, repeated exposure to certain ideas may get in the way of our ability to detect misleading content. Even if an idea is misleading, if it’s familiar enough it may still pass the test.

In direct contrast, content that is unfamiliar or difficult to process — but highly valid — may be incorrectly flagged as misinformation.

The social dilemma

Another challenge is a social one. Repeated exposure to information can also convey a social consensus, wherein our own attitudes and behaviors are shaped by what others think.

Group identity influences what information we think is factual. We think something is more “true” when it’s associated with our own group and comes from an in-group member (as opposed to an out-group member).

Research has also shown we are inclined to look for evidence that supports our existing beliefs. This raises questions about the efficacy of Twitter’s user-led experiment. Will users who participate really be capturing false information, or simply reporting content that goes against their beliefs?

More strategically, there are social and political actors who deliberately try to downplay certain views of the world. Twitter’s misinformation experiment could be abused by well-resourced and motivated identity entrepreneurs.

Twitter has added an option to report ‘misleading’ content for users in the US, Australia and South Korea.

How to take a more balanced approach

So how can users increase their chances of effectively detecting misinformation? One way is to take a consumer-minded approach. When we make purchases as consumers, we often compare products. We should do this with information, too.

“Searching laterally”, or comparing different sources of information, helps us better discern what is true or false. This is the kind of approach a fact-checker would take, and it’s often more effective than sticking with a single source of information.

At the supermarket we often look beyond the packaging and read a product’s ingredients to make sure we buy what’s best for us. Similarly, there are many new and interesting ways to learn about disinformation tactics intended to mislead us online.

One example is Bad News, a free online game and media literacy tool which researchers found could “confer psychological resistance against common online misinformation strategies”.

There is also evidence that people who think of themselves as concerned citizens with civic duties are more likely to weigh evidence in a balanced way. In an online setting, this kind of mindset may leave people better placed to identify and flag misinformation.

Leaving the hard work to others

We know from research that thinking about accuracy or the possible presence of misinformation in a space can reduce some of our cognitive biases. So actively thinking about accuracy when engaging online is a good thing. But what happens when I know someone else is onto it?

The behavioral sciences and game theory tell us people may be less inclined to make an effort themselves if they feel they can free-ride on the effort of others. Even armchair activism may be reduced if there is a perception that misinformation is already being dealt with.

Worse still, this belief may lead people to trust information more easily. In Twitter’s case, the misinformation-flagging initiative may lead some users to think any content they come across is likely true.

Much to learn from these data

As countries engage in vaccine rollouts, misinformation poses a significant threat to public health. Beyond the pandemic, misinformation about climate change and political issues continues to present concerns for the health of our environment and our democracies.

Despite the many factors that influence how individuals identify misleading information, there is still much to be learned from how large groups come to identify what seems misleading.

Such data, if made available in some capacity, have great potential to benefit the science of misinformation. Combined with moderation and objective fact-checking approaches, they might even help the platform mitigate the spread of misinformation.

The Conversation

Eryn Newman, Senior Lecturer, Research School of Psychology, Australian National University and Kate Reynolds, Professor, Research School of Psychology, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tesla’s AI day was really just a job fair 

On Thursday, Tesla held its AI Day. If you were looking for a great big product showcase from the automaker, you’d have been left disappointed. More than anything else, it was a job fair.

Remember those days when you’d turn up to some event, put on a nametag, and talk to people sitting at tables about why you should work for their company? Well, Tesla did just that following a bunch of presentations.

Elon Musk admitted last month on Twitter:

“Convincing the best AI talent to join Tesla is the sole goal.”

An industry besieged by challenges 

The last year hasn’t been great for auto Original Equipment Manufacturers (OEMs). They currently face semiconductor, rubber, and metal shortages, plus the impact of factory shutdowns during COVID-19 lockdown periods. Then there’s the bigger problem of a lack of talent — suitably trained professionals to design, build, and test the cars and other vehicles of the future.

Today’s cars are effectively data centers on wheels and require workers with deep expertise in AI and robotics, as well as a passion for cars. Yet auto OEMs like Tesla are struggling to attract and retain talent.

Grads don’t really want to work for most auto OEMs

For auto OEMs looking for talent, they should be worried. Employer branding specialist Universum released its 2020 list of most attractive employers for US students. In a list of 100 companies for computer science grads, here’s how car companies fared:

  • Tesla — 5
  • BMW — 51
  • Toyota — 60
  • Rolls Royce — 62
  • Daimler/ Mercedes Benz — 86
  • VW — 93

Ford and GM are noticeably absent. Even worse, grads would rather work at Uber (ranked 35) and Lyft (ranked 39).

There’s a huge skills shortage

In 2019, research by Boston Consulting Group and Detroit Mobility Lab predicted that the US mobility industry would need as many as 30,000 additional engineers with advanced-level skills to work on self-driving and electric cars and smart-infrastructure innovations over the next decade.

Other emerging forms of mobility, including autonomous trucks and drones, could push the number of new positions even higher.

Specifically, auto engineers need to be “cross-functional ‘tinkerers,’ who have a strong foundation in mathematics and physics; deep skills in artificial intelligence, machine learning, robotics, data sciences, and software; and a passion for cars.”

Auto OEMs are facing a lack of talent

It’s no surprise that auto companies frequently acquire startups to access their talent pool. Pretty much every role is embedded with technology.

Even those working the factory floor are deploying tech such as machine learning, IoT sensors, predictive analytics, robotics, and AR to construct vehicles. 

Aware of this, Tesla introduced a prototype of its very own general-purpose, bi-pedal, humanoid robot that can take over unsafe, repetitive, and boring tasks. The robot will use the same computer chip and eight cameras as Tesla vehicles. It will, in the future, carry 20kg and move as fast as 8 kilometres per hour.

Musk said:

Can you talk to it and say, ‘please pick up that bolt and attach it to a car with that wrench,’ and it should be able to do that. ‘Please go to the store and get me the following groceries.’ That kind of thing. I think we can do that.

We can likely see the robot working in Tesla factories in the future. But it’ll be a long time before we see a fully robotic workforce — Tesla would need to hire enough suitably skilled people to be able to design, build, and train them first. 

Bored mathematicians just calculated pi to 62.8 trillion digits

Swiss researchers at the University of Applied Sciences Graubünden this week claimed a new world record for calculating the number of digits of pi – a staggering 62.8 trillion figures. By my estimate, if these digits were printed out they would fill every book in the British Library ten times over. The researchers’ feat of arithmetic took 108 days and 9 hours to complete, and dwarfs the