Clipping details

Mainichi Daily News / 2020/1/14 20:10
http://mainichi.jp/english/articles/20200114/p2a/00m/0na/016000c

Humanities-science cooperation essential in AI age

Artificial intelligence (AI) technology is advancing rapidly. Since it shook the world in 2015, when an AI program beat a professional player of the board game Go for the first time, the technology has been applied to computer games, home appliances, medicine and consumer services. AI will likely advance even further in the 2020s and greatly change society.
AI improves itself through deep learning, in which it finds commonalities in vast amounts of data. In Japan, fifth-generation (5G) wireless technology will be fully introduced in spring, allowing data to be accumulated on all kinds of events and activities.
Society has especially high hopes for AI in the advancement of self-driving technology. Self-driving AI assesses ever-changing driving conditions and operates the vehicle accordingly. 5G technology will also help make telesurgery possible, in which doctors maneuver AI-assisted robot arms to operate on patients on remote islands.
AI is also a promising part of a new workforce. In tasks such as diagnosing cancer and other diseases from medical images, judicial work that requires consulting massive volumes of documents, and financial trading where a one-second delay could cause a huge loss, its abilities exceed those of humans.
Consequently, AI will replace some human workers. Those with high-paying jobs and white-collar workers are no exception. A study found that about half of the working population in the United States currently have jobs that are replaceable by AI.
At the same time, no legal systems have been established regarding who will bear liability when AI makes mistakes in self-driving vehicles or telesurgery. Will society accept a judgment made by AI when, for example, a driverless vehicle swerves to avoid a child who suddenly runs into its path and crashes head-on into a car carrying an elderly person?
Researchers are working on having AI create music and novels, but how to manage copyrights for works created by AI remains unclear. In the near future, a flood of cultural products churned out by AI could hinder humans' own creative activities.
In his book, "Life 3.0," Swedish-American physicist Max Tegmark points out that AI will offer humans both great opportunities and difficult challenges. He argues that while AI will create immense benefits for us, it could also cause irreversible consequences depending on how the technology is used, and that it is necessary to formulate regulations and social systems to guarantee safe and beneficial AI before it's too late.
The epitome of the difficult challenges AI could pose is its application to military use. Military powers such as the U.S., China and Israel are believed to have already developed AI-operated unmanned weapons that can identify enemies and decide to attack on their own. Yet there has been only limited international dialogue on regulating these technologies.
The U.S. and China stand out in AI research and development. Experts warn that the products and services of American IT giants Google, Apple, Facebook and Amazon, which have recruited the best and brightest minds and poured massive resources into them, could end up manipulating consumers' minds and behavior. China uses AI not only as a means to achieve military hegemony but also to control its own citizens.
In the 20th century, the main driving force behind technological innovation was war. Radar, space technologies and the internet are among the fruits borne by military research.
Scientists and engineers were given goals, and they worked on their projects focused solely on how to make them happen. It was only when their achievements caused a calamity that they realized the seriousness of their work's effects. A classic example is the development of atomic bombs by the U.S. government.
To keep AI from going down the same path, we need to face the dark side of technology and commit to human-centered principles. As AI comes into wider use, it is essential to incorporate perspectives from the humanities and social sciences, such as philosophy, ethics, law and psychology, and not just from engineering, computer science and other fields directly involved in AI development.
In the years ahead, Japan will face an era of rapid aging, a falling birth rate and a declining population. In metropolises, the number of seniors in need of nursing care will surge, and labor shortages in the medical and nursing fields are expected to worsen. In rural areas, meanwhile, it is feared that depopulation will affect people's access to shops and to medical and administrative services.
Solutions that apply AI technologies to challenges unique to Japan must be not only realistic but also serve the public interest. We want to see AI put to steady use in people's daily lives and become rooted in our mature society as beneficial infrastructure.
Of course, we need to stay alert to unwelcome developments. Japan's Basic Act on Science and Technology, which serves as the basis for the country's science and technology policies, is slated for revision this year, and humanities studies will be added to the fields covered by the amended law. When humanities and science experts work together and think about the principles that should be protected for humankind, a multifaceted approach to the values of science and technology becomes possible, and financial support for such work could also be strengthened. We hope that humanities-science cooperation on AI technologies can serve as a model for other fields.

