The European Union revealed a plan on Wednesday to counter the growth of tech conglomerates in the U.S., Asia and elsewhere and to restore "technological sovereignty" in Europe. The plan would place tougher regulations on tech giants, spur more spending in the European tech sector, and require testing and certification for critical AI systems in health, policing and transportation.
Thomas Jungbauer, professor of strategy and business economics at the SC Johnson College of Business at Cornell University, studies tech firms, entrepreneurship and the economy. He says the EU is misguided in its efforts to compete with big American and Asian tech companies, and would be better served by embracing them, and building a “tech-savvy society” where those companies can thrive.
"How does Europe deal with the fact that they have fallen behind in the information technology race? Network effects and technological factors cause many markets in the tech and sharing economy to be 'winner-takes-most' scenarios, that is, markets in which one big firm dominates while other, smaller players serve niche needs. Examples are Amazon, Facebook, Google, Uber, Airbnb, etc. For this reason, it is highly unlikely that Europe has any chance to create meaningful competitors in these industries, no matter the resources it pours into the effort, unless it erects protective barriers as China did for Baidu or WeChat, which will not be done in Europe.
"But in today's age it is questionable whether the nationality of business giants in the Western world plays a crucial role. Does Google serve the U.S. public, Samsung the Korean one? In my opinion, it is more important to create and foster a society in Europe that makes these tech giants strongly desire a strong foothold in Europe; the location of their headquarters is of secondary importance. In other words, rather than creating competitors, make the market an even more important one for existing tech giants. This can be done through complementary innovations, electronic government and the like — in short, by fostering a tech-savvy society."
Joseph Halpern, professor of computer science at Cornell University, studies artificial intelligence and ethics, and spoke last week at the American Association for the Advancement of Science Annual Meeting about the ethics of machines making decisions. He says an EU plan to independently test and certify critical AI systems is a positive step in an era with so many ethical concerns.
"Currently, there are few downsides for a company if software it puts out suffers from flaws. If software I use gets hacked because of poor security procedures, the company that designed the software is typically not punished. When it comes to highly critical applications, it becomes all the more important both to test and certify products, and to hold the companies that design them to a high standard.
"Many consumer products are UL certified; it would be good to have an analogous certification for software. With modern AI software, there is also an ethical issue: for example, even if facial recognition software works correctly, is it appropriate to deploy it? I see the EU initiative as a useful first step in addressing these issues."