Plenary Talks

  1. Mr. Hiroaki Okuchi (Toyota Motor Corporation)
  2. Prof. Erik Hollnagel (University of Southern Denmark)
  3. Dr. Chieko Asakawa (IBM)

Toyota’s Research into the Interaction between Automated Driving and Humans
~ The Mobility Teammate Concept ~

Thursday, September 1, 2016, 10:50-11:50

Mr. Hiroaki Okuchi

Managing Officer
Toyota Motor Corporation

Recently, automated driving with AI has been in the spotlight. Toyota Motor Corporation (Toyota) has been developing automated driving technology since the 1990s and believes that coordination between the vehicle and the human is essential. In this context, Toyota announced the Mobility Teammate Concept last October. It represents Toyota’s development goal that “In order to have all people move freely, smoothly and safely, driver and vehicle should share the same goal and build a partnership by supporting each other.”

To achieve this purpose, we have studied the ideal interaction between the driver and automated driving, as well as between automated driving and society, from various perspectives, including medicine, psychology, and engineering.

Automated driving technology could reduce crashes caused by human error, but other problems will emerge. In this presentation, we clarify the challenges at each automation level (as defined by NHTSA*) and propose the direction of the necessary next steps. NHTSA has defined four different levels of automated driving systems. At Level 2 and Level 3 automation in particular, drivers may be confused about their responsibility for monitoring the roadway and performing safe operations. Analyzing take-over request (TOR) situations and driver behavior at TOR is an urgent topic that the automotive industry must address. Since other domains such as aviation and railway have studied these types of human characteristics, automotive HMI must be designed with driver-system coordination in mind. Furthermore, we have research themes investigating discomfort and motion sickness in automated driving, as well as the social acceptability needed for smooth traffic flow. In addition, because our goal is freedom of movement for all people, we would like to increase the opportunities for older drivers to drive by eliminating situations that are unsafe for them, and thereby contribute to the revitalization of an aging society. For this purpose, research into technology for detecting medical emergencies in the driver is essential, which necessitates collaboration with medical institutes.

When fully automated driving (NHTSA Level 4) becomes available, the driver’s in-vehicle activity will change significantly. HMI will need to support interactions that differ completely from current assumptions, playing a role not only in providing driving-support information but also as a communication partner for the driver.

There are high expectations that automated driving will eliminate crashes, resolve traffic jams, support older drivers, and resolve other social problems. On the other hand, we must introduce products that offer unique value and maintain excitement for customers, such as being “Fun to Drive”.

The challenges of automated driving will become more diverse as society changes. Toyota would like to create a future society with automated driving through new approaches and collaboration with other fields. We appreciate comments from all academic society members.

* National Highway Traffic Safety Administration, an agency of the Executive Branch of the U.S. government, part of the Department of Transportation.

Short Biography of Speaker

  • Master’s degree in engineering, Nagoya University, 1988
  • Apr. 1988 Joined Denso Corporation
  • Jan. 2009 Director, System Control Components Engineering Division 1
  • Jun. 2010 Director, Head of Business Unit, System Control Components Business Unit
  • Jun. 2013 Executive Director
  • Apr. 2015 Managing Officer, Toyota Motor Corporation
  • Apr. 2015 Deputy Chief Officer, R&D Group
  • Apr. 2016 Chief Officer, Frontier Research Center
  • Apr. 2016 Advanced R&D and Engineering Company
Key non-TMC posts
  • Audit & Supervisory Board member, Jeco Co., Ltd. (June 2016-)

Being safe in an unsafe world – The practical side of resilience engineering

Thursday, September 1, 2016, 13:00-14:00

Prof. Erik Hollnagel

University of Southern Denmark

Safety is usually defined as a condition where as few things as possible go wrong. Safety efforts therefore focus on the identification and reduction of risk and harm, and safety is measured by the number of cases where something has failed, resulting in accidents and incidents. This traditional definition of safety has been called Safety-I. Its main shortcoming is that the management of safety is based on evidence from random snapshots of failed system states. Resilience engineering argues that safety should be viewed from a different perspective, with emphasis on things that go well. According to this definition, called Safety-II, a system is safe if as much as possible goes well. Safety management and the understanding of safety must therefore be based on a systematic understanding of how performance succeeds, rather than on how it fails. The purpose of safety efforts is similarly to ensure that things go well rather than to prevent them from going badly.

Safety-II has demonstrated that things usually go well because people at work, on all levels of an organisation, are able to adjust what they do to the existing conditions. Such adjustments are necessary because the actual working conditions always differ from the expected conditions. Put differently, work-as-done (WAD) will always be different from work-as-imagined (WAI). Instead of trying to eliminate this difference by requiring that WAD complies with WAI, resilience engineering looks at the four abilities or potentials that are necessary for an organisation to perform resiliently. These are the potentials to respond, to monitor, to learn, and to anticipate. Only by focusing on these potentials and by providing ways and means to improve them, will it be possible for organisations to be safe in an unsafe world.

Short Biography of Speaker

Erik Hollnagel is Professor at the Institute of Regional Health Research, University of Southern Denmark (DK), Chief Consultant at the Centre for Quality, Region of Southern Denmark, Adjunct Professor at Central Queensland University (Australia), Visiting Professor at the Centre for Healthcare Resilience and Implementation Science, Macquarie University (Australia), and Professor Emeritus at the Department of Computer Science, University of Linköping (S). He has through his career worked at universities, research centres, and industries in several countries and with problems from many domains including nuclear power generation, aerospace and aviation, software engineering, land-based traffic, and healthcare.

His professional interests include industrial safety, resilience engineering, patient safety, accident investigation, and modelling large-scale socio-technical systems. He has published widely and is the author/editor of 22 books, including five books on resilience engineering, as well as a large number of papers and book chapters. The latest titles, from Ashgate, are “Safety-I and Safety-II: The past and future of safety management”, “Resilient Health Care Vol 1 & 2”, and “FRAM – the Functional Resonance Analysis Method”. Erik also coordinates the Resilient Health Care net (www.resilienthealthcare.net) and the FRAMily (www.functionalresonance.com).

Making the real world accessible

Friday, September 2, 2016, 13:30-14:30

Dr. Chieko Asakawa

IBM Fellow
IBM Research - Tokyo

Can you imagine how visually impaired people like me manage in the real world? Can you imagine how we interact with others, how we move around unfamiliar places, and how we recognize the things surrounding us? There are so many challenges to living in society without vision. We call this the real-world accessibility challenge. Now, combinations of sensors and cognitive computing technologies are offering new ways for the blind to interact with the world more meaningfully, and to realize a Cognitive Assistant for real-world accessibility. Our Cognitive Assistant will augment the missing or weakened abilities of people with disabilities by helping them interact with, navigate, and understand the surrounding world through the power of cognitive computing.

In this talk, I will first review the history of information accessibility, showing how technologies have helped people with visual impairments expand their access to information resources and improve their quality of life, starting with Braille digitalization and voice-based Web access. I will then give an overview of the many challenges around real-world accessibility. Real-world accessibility requires the most advanced sensing technologies, recognition engines, and machine-learning technologies to understand the surrounding world and to convert the data into a non-visual medium such as voice. We are working in open collaboration with Carnegie Mellon University.

I will demonstrate some of the prototype smartphone applications that we are currently working on. Beacon-based navigation requires installing Bluetooth beacons in the environment, but it allows us to achieve sufficient accuracy for non-visual navigation. A vision-based shopping assistant does not require us to modify the environment; it only requires collecting images of the environment to create a 3D model, and the user is then localized by comparing what the smartphone camera sees with this model. We will also demonstrate a facial recognition system that detects emotions and recognizes nonhuman objects as well. In conclusion, I will summarize the research areas needed to realize the era of real-world accessibility through open collaboration.

Short Biography of Speaker

Chieko Asakawa has been instrumental in furthering accessibility research and development for three decades. By challenging traditional thinking on how the visually impaired use technology, she has explored solutions to improve Web accessibility and usability for the visually impaired and others with special needs. A series of pioneering technologies developed under Chieko's leadership contributed significantly to advancing Web accessibility, including groundbreaking work on digital Braille and the voice browser. Today, Chieko is focusing on advancing cognitive assistant research to help the blind regain information by augmenting missing or weakened abilities in the real world. She is a member of the Association for Computing Machinery (ACM), the Information Processing Society of Japan, and the IBM Academy of Technology. She was inducted into the Women in Technology International (WITI) Hall of Fame in 2003. Chieko was appointed an IBM Fellow in 2009, IBM's most prestigious technical honor. In 2013, the government of Japan awarded Chieko the Medal of Honor with Purple Ribbon for her outstanding contributions to accessibility research, including the development of the voice browser for the visually impaired.
