Artificial intelligence tools can be risky depending on the accuracy of the data they are fed.
Microsoft Research Asia, the research arm of Microsoft Corp. in the Asia-Pacific region, was founded in 1998 in Beijing. With more than 200 scientists and 300 visiting scholars and students, the lab has made artificial intelligence one of its focuses.
However, applying AI can be risky, Jason Tsao, Greater China AI and Area Transformation Lead at Microsoft, tells S&P Global Market Intelligence, adding that companies need to make sure their AI technology can be explained to regulators in the event of an inconsistency.
S&P Global Market Intelligence: What is Microsoft Research working on in the AI field?
Jason Tsao: Microsoft Research, which many of the leading researchers at Chinese AI companies including SenseTime Group Ltd. and Alibaba Group Holding Ltd. came from, has been working on simulating human capabilities in language processing and image recognition. However, there are still risks related to what type of data is used and how accurate it is. For example, if a listed company were using AI to produce a financial statement, the U.S. regulator could challenge the company if anything went wrong, and the company would need to explain how the AI system produced the wrong numbers. This is a technological challenge for us.
When is data biased?
The data used to train machines can be biased. The New York Times reported earlier that some AI systems can recognize Caucasian males better than African-American females. This is mainly because researchers tend to use more Caucasian and more male data to train machines. Another issue is that researchers may not be completely transparent with users when deploying these technologies.
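To make the mechanism Tsao describes concrete, here is a minimal, hypothetical sketch, not code from Microsoft Research or from any cited study: when one synthetic group ("A") dominates the training set, a simple classifier ends up noticeably less accurate on the underrepresented group ("B"). The groups, features and numbers are all illustrative assumptions.

```python
# Illustrative sketch: imbalanced training data can yield uneven accuracy
# across groups. Groups "A" and "B" are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift, label_noise=0.05):
    """Generate 2-feature samples for one synthetic group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    # The "true" rule differs slightly per group (different feature offset)
    y = (X.sum(axis=1) > 2 * shift).astype(int)
    flip = rng.random(n) < label_noise  # a little label noise
    y[flip] = 1 - y[flip]
    return X, y

# 90% of the data comes from group A, only 10% from group B
X_a, y_a = make_group(9000, shift=0.0)
X_b, y_b = make_group(1000, shift=1.5)

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
group = np.array(["A"] * len(y_a) + ["B"] * len(y_b))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group
)

clf = LogisticRegression().fit(X_tr, y_tr)

# Overall accuracy looks fine, but the underrepresented group fares worse
for g in ("A", "B"):
    mask = g_te == g
    acc = clf.score(X_te[mask], y_te[mask])
    print(f"Group {g}: accuracy = {acc:.2%} (n = {mask.sum()})")
```

Because the model is fit mostly on group A, its decision boundary suits that group; group B's accuracy drops even though B's labels are no noisier, which is the pattern behind the reported gap in recognition rates.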
When is data inaccurate?
It can be risky if data is gathered for background purposes from facial recognition technology, like in airports, and the technology mistakenly assigns someone the wrong identity. The person affected will not know. Microsoft Research is working on this issue.
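As a rough, hypothetical illustration of how such a misidentification can happen silently, the sketch below matches a face embedding against an enrolled gallery by nearest neighbour. The names, the 128-dimensional embeddings and the similarity threshold are all invented for the example; this is not how any real airport system or Microsoft product works.

```python
# Hypothetical sketch: identity assignment by nearest-neighbour matching on
# face embeddings can silently return the wrong identity.
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Gallery of enrolled identities (random stand-ins for real embeddings)
gallery = {name: rng.normal(size=128) for name in ("alice", "bob", "carol")}

# The traveller is "dave", who is not enrolled at all, but whose embedding
# happens to land close to bob's (similar appearance, sensor noise, etc.)
probe = gallery["bob"] + rng.normal(scale=0.3, size=128)

THRESHOLD = 0.8  # similarity above which the system accepts a match

best_name, best_score = max(
    ((name, cosine(probe, emb)) for name, emb in gallery.items()),
    key=lambda pair: pair[1],
)

if best_score >= THRESHOLD:
    # Wrong identity assigned; the traveller is never told a match was made
    print(f"Traveller identified as '{best_name}' (similarity {best_score:.2f})")
else:
    print("No match: identity unknown")
```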
How does Microsoft Research solve problems like this?
We have created an internal committee to help us address AI-related ethical problems such as privacy and security.
Do you think AI will replace humans in the future?
AI cannot replace us in the foreseeable future. We have AI tools that produce summaries of research papers for developers, for example. However, it is people who pick the papers they think are most important, and it is humans who analyze and make decisions.