In real environments, many sounds are present at once, and we usually hear a mixture of them. For example, when you use the speech recognition function of a PC, not only your voice but also the sound of a nearby TV may be captured together. When you record your daughter's piano performance at a concert, the sneeze of the person in the next seat may be recorded as well. Aiming to extract only the target signal from such a sound mixture, and to edit or modify it as you like, we have developed a technique that rapidly separates a sound mixture into its individual sources using multiple microphones.
A method is proposed for preventing privacy invasion through unintentional capture of facial images. Prevention methods such as covering the face and painting particular patterns on the face are effective but hinder face-to-face communication. The proposed method overcomes this problem through the use of a device worn on the face that transmits near-infrared signals that are picked up by camera image sensors, which makes the face in captured images undetectable. The device is similar in appearance to a pair of eyeglasses, and the signals cannot be seen by the human eye, so face-to-face communication is not hindered.
Connecting Society and Academia with Data LODAC: Building the Open Social Semantic Web Platform for Academic Resources
The aim of the project is to provide an open and flexible platform for academic resources with Linked Open Data (LOD). LOD is an emerging technology that can realize a huge network of data, just as the Web realizes a huge network of documents. We are currently developing various LOD data silos, e.g., museum collections and biological species information. We are also developing applications such as Yokohama Art Spot, a mashup of local information from different data silos.
Tokyo Virtual Living Lab: Smart City Simulation An Experimental Space for Conducting Controlled Driving Behavior Studies based on a Multiuser Networked 3D Virtual Environment and the Scenario Markup Language
Authors: Kugamoorthy Gajananan*, Helmut Prendinger*, Marc Miska**, Edward Chung** *National Institute of Informatics, **Smart Transport Research Centre, Queensland University of Technology Traffic congestion is a big economic and environmental problem. Therefore, traffic engineers try to understand the causes of traffic congestion. However, there exists no effective method to understand the origin of traffic congestion at the microscopic level: Who is the magical "first driver" in traffic congestion? How does the slowing of the "first driver" propagate to the driving behavior of following cars? Such questions about an important real-world problem constitute the key motivation for this research. Accordingly, we develop the following tools and techniques in collaboration with traffic engineers from Queensland Univ. of Technology, based on our original framework for social simulations in massively multiuser networked 3D environments. First, we designed the Scenario Markup Language (SML), a novel scripting language for specifying and orchestrating events in highly dynamic scenarios, such as traffic scenarios. SML is already used by our collaborators. To draw valid conclusions from driving behavior in the case of an accident, a large amount of reliable data is required. Therefore, second, we developed a new technique that allows the experimenter to create time-critical events (e.g. an accident). This ensures the reproducibility of the drivers' experience and hence large-scale data collection becomes possible.
Tokyo Virtual Living Lab: Smart City Simulation iCO2: Multi-User Eco-Driving Training Environment based on Distributed Constraint Optimization
Authors: Marconi Madruga, Helmut Prendinger National Institute of Informatics Intelligent Transport Systems (ITS) are advanced applications that play a major role in the future of mobility and transportation. They aim to integrate existing transportation infrastructure with communication networks in order to reduce congestion and travel time, and thus reduce environmental impact. Before implementing ITS in the real world, there is a need to evaluate such systems in a risk-free environment with rich and reliable driving behavior data. We propose crowdsourcing in a virtual environment by providing an incentive for the user (driver) to drive in an adequate way for a desirable period of time. Taking eco-driving as an example task, we developed a novel incentive mechanism that automatically adapts the difficulty level of eco-driving, so that drivers feel challenged over extended periods of time and hence generate important behavior data for traffic engineers. The research challenge is to determine the optimal challenge level for up to 300 simultaneous drivers in a shared simulation space.
Tokyo Virtual Living Lab: Smart City Simulation Online Parameter Estimation of Microscopic Car-following Models
Authors: Reinaert Molenaar1, Helmut Prendinger2, Hans van Lint3, Bart De Schutter1 1Delft University of Technology, Delft Center for Systems and Control, 2National Institute of Informatics, 3Delft University of Technology, Dept. of Transport and Planning Traffic congestion is a serious problem in densely populated areas, costing the people involved a great deal of time and money. The development of new approaches to solving these problems using intelligent transportation systems (ITS) is an important subject in transportation research. Simulated environments can be used to test the influence of these new approaches on human driving behavior. For that, traffic simulators are used to generate ambient traffic, and driving simulators allow interactive driving by human drivers. For experiments in this simulated environment, the realism of the ambient traffic is of vital importance, as it improves the reliability of the results from testing. Drivers should react to changes in the simulated environment as they would to changes in a real-life driving situation. Therefore, the goal of this project is to develop a learning functionality for a traffic simulator, to improve the realism of the driving behavior of the simulated traffic. This is achieved by observing the driving behavior of a human participant in the simulated environment. This information is used to improve the models used for generating simulated traffic, giving them more human-like driving behavior. This increases the realism of the simulated traffic, which improves the reliability of the simulator. The improved reliability of the simulator helps to diagnose the effects of new ITS applications.
Health: Advanced Techniques for better Training and Information Provision Intelligent Biohazard Training with 3D Interaction and Real-time Task Recognition
Authors: Nahum Alvarez*, Mark Cavazza†, Shuji Fujimoto‡, Mika Shigematsu§, Helmut Prendinger* *National Institute of Informatics, †School of Computing, Teesside University, ‡Faculty of Medical Sciences, Kyushu University, §National Institute of Infectious Diseases Three-dimensional (3D) environments offer an ideal setting to develop intelligent training applications; yet, their ability to support complex procedures depends on the appropriate integration of knowledge-based techniques and natural interaction. We describe the implementation of an intelligent rehearsal system for biohazard laboratory procedures, based on the real-time instantiation of task models from the trainee's actions. A virtual biohazard laboratory has been recreated using the Unity3D engine, in which users interact with laboratory objects using hand gestures through a Kinect device. Realistic behaviour for objects is supported by the implementation of a relevant subset of common sense and physics knowledge. User interaction with objects leads to the recognition of specific actions, which are used to progressively instantiate a task-based representation of biohazard procedures. The dynamics of this instantiation process supports trainee evaluation as well as real-time assistance. This system is designed primarily as a rehearsal system providing real-time advice and supporting user performance evaluation. We present results from onsite testing with medical students, as well as detailed examples illustrating error detection and recovery.
Health: Advanced Techniques for better Training and Information Provision Automated Text to Dialogue Generation for Better Understanding Clinical Guidelines
Authors: Pascal Kuyten1, Helmut Prendinger2, Paul Piwek3, Svetlana Stoyanchev4, Mitsuru Ishizuka1 1The University of Tokyo, 2National Institute of Informatics, 3Open University, UK, 4Stony Brook University, NY Information in today's society is rapidly growing, yet more information does not always lead to more or better knowledge. Therefore, scientists try to find ways to share, understand, and remember important information. Take clinical guidelines as an example: Can we help the content author to improve presentation of the guideline, so that even laypersons and elderly can easily understand and remember them? In collaboration with the Open University in Milton Keynes, and Stony Brook University in New York, we have created tools and techniques that can analyze, rewrite and visualize information automatically. First, we designed a high-level discourse parser (HILDA) that can identify the basic units of text and how they relate (discourse). Secondly, the Open University designed a system that uses such discourse to rewrite the text into a coherent dialogue (CODA). We will show results on whether information displayed in a 3D environment can help receivers of information to more thoroughly comprehend the information.
NoE on Social Project Management COMMUNIGRAM-NET
COMMUNIGRAM-NET is a Network of Excellence (NoE) that aims at integrating research and best practices currently conducted by leading research groups and educational organisations in the field of Social Project Management. Collaboration in the fields of social project management, collective intelligence, and knowledge creation is at the core of the COMMUNIGRAM-NET Network of Excellence.
Andres Frederic Kenneth Brown, Jarbas Lopes Cardoso, Fernando Ferri, William Grosky, Yoshiharu Hirabayashi, Rajkumar Kannan, Epaminondas Kapetanios, Asanee Kawatrakul, Tetsu Tanabe
Collective Intelligence based social project management CI-Communigram
CI-COMMUNIGRAM is a collective intelligence-based platform for doing projects to foster innovation, knowledge creation and sharing, productivity and personal engagement.
NII research cloud gunnii is a bare-metal cloud service with which researchers can easily deploy research environments on demand. Users enjoy the stable performance of bare-metal machines in the cloud instead of putting up with the performance overhead of virtualized clouds.
Advanced ICT Center
Graph transformation that can propagate modifications bidirectionally, and its applications An integrated framework for developing well-behaved bidirectional model transformations and its applications
Model transformation in model-driven development plays an important role in the formal treatment of the development process. By composing larger transformations from smaller ones in a systematic manner and propagating modifications to the models in both directions (not only from source to target but also backwards), the evolutionary development process can be made robust. We demonstrate our tool for the systematic development of well-behaved bidirectional model transformations, recent innovations, and applications.
SIGVerse is an open software platform for designing and investigating a symbiotic society of humans and robots. Users can design robot agents and deploy them in a virtual environment to conduct embodied and social interaction experiments. Virtual robots can also interact with real humans through immersive interfaces. The worldwide robot competition RoboCup@Home has adopted this simulator to realize a simulation competition on collaborative service robots. Please try our demonstration of interaction with cooperative intelligent robots.
Support for learning should suit the needs of individual students. To this end, we need to measure and diagnose students' mastery of pre-defined skills, thereby providing them with detailed information regarding their specific strengths and weaknesses. Cognitive diagnostic test systems are one way to help teachers direct students to more individualized remediation and to help students self-study efficiently. A demo of a Japanese vocabulary test will be shown as part of the research.
How to access a large-scale video archive? The Challenge of Bridging the Semantic Gap via Video Media Content Analysis
Video content-based retrieval is indispensable for accessing necessary information in broadcast videos or video archives on the Internet. We are addressing content-based retrieval for large-scale video archives via automatic extraction of video content information using video semantic analysis. This requires bridging the so-called semantic gap, which is known to be a very challenging task, and we are tackling this issue using several techniques from image analysis, machine learning, and information retrieval. We will demonstrate our video search engine enabled by our research outcomes.
Eyes Tell More Than Mice GLASE-IRUKA: Ostensive Interactive Image Retrieval System Based on EyeGaze
Understanding users' information needs and intentions is the biggest challenge in successful information retrieval. Eye gaze can indicate the needs and intentions behind a query, often below the level of consciousness, and can provide continuous feedback to the system without placing any burden on users. We demonstrate an image retrieval system with a flexible user interface and continuous relevance feedback from users' eye gaze.
Noriko Kando Viktors Garkavijs, Pawitra Chiravirakul, Tetsuo Ishikawa, Diana Krusteva, Lica Okamoto, Mayumi Toshima
Quantum information Quantum information using Bose-Einstein condensates
Many of us have heard of a quantum computer, but it is hard to imagine what this would really look like. Would it be something that would fit inside a laptop, or would it occupy an entire room like the first computers did? The answer to this is still unknown, because researchers across the world are trying many different approaches to try and build a quantum computer. We describe the various approaches to quantum computing, including ion traps, superconducting qubits, and quantum dots inside semiconductors. We also describe a new approach that we are working on, using Bose-Einstein condensates.
We introduce the Alpha version of "Qubit: The Quantum Computing game". This game is designed to crowd-source the optimization of quantum circuits. This problem can be translated to a puzzle problem that is amenable for release to the general public. Qubit will be released on Tablet devices and as a Web based game.
Light and matter exhibit both wave and particle properties in quantum physics. Particles in a coherent state oscillate with the same phase and frequency. Recently, our group proposed a coherent computer to solve optimization problems using an injection-locked laser network. The poster will show some numerical simulation results and experimental results. Our other interest is exciton-polariton condensates in a microcavity. Exciton-polaritons are composite particles of a photon and an exciton. The poster will explain some experimental results and potential applications.
Recent supercomputers perform 10^15 operations per second by executing hundreds of thousands of processor cores in parallel for a single parallel application. However, system performance is limited by inter-core communication latency and power consumption, which makes further speedup of supercomputers difficult. To break this limitation, we propose, perhaps surprisingly, to use random network topologies, and our analysis results show that random topologies achieve good system scaling.
In automotive control systems, many and various types of ECUs (Electronic Control Units) are placed throughout the automobile. This causes serious problems: the connection cables weigh tens of kilograms, which adversely affects both fuel consumption and manufacturing cost. This research project will develop a new centralized and dependable approach in which many ECUs are integrated into one chip using a dependable Network-on-Chip architecture, with only sensors and actuators left in their original places.
Software Enhanced Monitoring Self-adaptive Software for Smart Sensor Systems
Software-controllable smart sensor systems can improve the quality of sensory data over long periods. In this poster presentation, we introduce research topics related to self-adaptive software for smart sensor systems: 1) self-healing for recovery from sensory data failures, 2) self-adaptive task allocation for shared smart sensor systems, and 3) a software development process for smart sensor system software.
How to develop self-adaptive systems Software Development Process for Self-adaptive Control Systems
Software systems are used in various environments. In response to changes in the environment, software should change its own structure and behavior to keep satisfying its requirements. Such a capability is known as self-adaptiveness. A self-adaptive system should be able to (1) monitor changes in the environment, (2) analyze and plan what changes are needed, and (3) execute the changes. In this poster presentation, we introduce research topics related to analyzing and designing such self-adaptive systems, using control systems as an example.
The Top SE Project is a practical education program aiming to cultivate software engineers who have acquired highly advanced development techniques based on the concept, "intellectual manufacturing education based on science." The students experience application of learnt techniques to practical problems through their graduation studies, in addition to lectures provided by professionals from universities and companies. About 200 alumni are active in various fields.
GRACE Center is a world-leading software research center in NII engaged in research, education and practical work in alliances with research organizations in Japan and overseas and as part of industry-academia collaboration. GRACE center seeks to put in place the foundations of 21st century software, while developing world-class researchers and engineers who will go on to play central roles in the next generation.
Toward Efficient and High-Quality Software Development State-of-the-art Technologies for Software Analysis, Testing, and Model Checking
Nowadays, improving efficiency and reliability in software development is vital, since software is becoming increasingly complex. We propose support methods for various development stages, such as software comprehension, testing, and verification of complex software behavior. The methods are based on state-based and mutation analysis of Ajax applications. Moreover, to derive sophisticated code with a guarantee of correctness, we also improve the derivation of Scala code from theorem proofs. Further, for rigorous analysis of large and complex systems, we propose a method for planning proper refinement of formal specifications.
GRACE Center provides edubase Cloud, Space as an educational environment for IT specialists, and Portal as a portal site aimed at continuously disseminating and developing good IT educational materials. These services aim to cultivate leading IT specialists who have the ability to take the initiative in software development in companies and other organizations.
enPiT is a national education project for cultivating world-class IT engineers versed in cutting-edge technologies. It comprises four education courses: cloud computing, security, cyber-physical systems, and business applications. We promote a nationwide educational network on these four disciplines involving not only academia but also industry. The educational program focuses mainly on practical teaching methods such as project-based learning and problem-based learning.
As the roles of software increase, compliance with "promises" such as laws and specifications becomes more significant but also more difficult. At the same time, cooperation through "promises" is now common across organizations through web services and clouds. This presentation introduces our research on the analysis and fulfillment of such "promises". We are tackling both the engineering (requirements analysis and formal methods) of software that satisfies "promises" and the computing (autonomous cooperation and self-adaptation) of software that understands "promises".
We solve the known problem of eliminating unnecessary internal element construction, as well as variable elimination, in XML processing with XQuery, without ignoring document order. The semantics of XQuery is context sensitive and requires preservation of document order. We propose, as far as we are aware, the first XQuery fusion that can deal with both the document order and the context of XQuery expressions.
Model transformations are a key element in the OMG's Model-Driven Development agenda, providing a standard way to represent and transform software artifacts such as requirements, design models, program code, tests, configuration files, and documentation in software development. However, the source and target models of a transformation usually co-exist and evolve independently. How to propagate modifications correctly across models in different formats and guarantee system consistency remains an open problem. This project aims to solve this problem based on bidirectional model transformation. The success of the project would lead to a novel formal method for evolutionary software development and a trustworthy tool for artifact synchronization.
Existing distributed parallel programming models (e.g., MapReduce) are widely used for indexing web pages, log analysis, machine learning, and so on. But how to systematically develop and optimize parallel programs remains a big challenge. We propose a high-level framework for systematically and easily developing MapReduce programs, making use of program calculation theories. Efficient MapReduce programs can be automatically derived under certain rules, so users can write efficient programs without worrying about parallelism or needing deep knowledge of MapReduce programming.
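As a minimal illustration of the programming model the abstract refers to, here is a hedged sketch of MapReduce reduced to its three phases (map, shuffle by key, reduce), run sequentially in plain Python; the function names are ours, not part of any framework mentioned above.

```python
from functools import reduce
from itertools import groupby

def map_reduce(records, mapper, reducer):
    """Minimal sequential model of MapReduce: map, shuffle by key, reduce."""
    pairs = [kv for r in records for kv in mapper(r)]        # map phase
    pairs.sort(key=lambda kv: kv[0])                         # shuffle: group by key
    return {key: reduce(reducer, (v for _, v in group))      # reduce phase
            for key, group in groupby(pairs, key=lambda kv: kv[0])}

# Word count, the canonical MapReduce example.
counts = map_reduce(["to be or not to be"],
                    lambda line: [(w, 1) for w in line.split()],
                    lambda a, b: a + b)
# counts == {"be": 2, "not": 1, "or": 1, "to": 2}
```

A calculational framework of the kind described would derive an efficient distributed version of such a program automatically, rather than having the user hand-tune the mapper and reducer.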
We propose a novel computation model for GPU. Known parallel computation models such as the PRAM model are not appropriate for evaluating GPU algorithms. Our model, called AGPU, abstracts the essence of current GPU architectures such as global and shared memory, memory coalescing and bank conflicts. We can therefore evaluate asymptotic behavior of GPU algorithms more accurately than known models and we can develop algorithms which are efficient on many real architectures.
We propose a new succinct de Bruijn graph representation. If the de Bruijn graph of the k-mers in a DNA sequence of length N has m edges, it can be represented in 4m + o(m) bits, which is much smaller than existing representations. The numbers of outgoing and incoming edges of a node are computed in constant time, and the outgoing and incoming edges with a given label are found in constant time and O(k) time, respectively.
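The succinct data structure itself is beyond a short sketch, but the objects it encodes are simple. The following hypothetical Python snippet (ours, not the authors' implementation) builds the plain edge set of a de Bruijn graph, where nodes are (k-1)-mers and each distinct k-mer is an edge from its prefix to its suffix; m = number of edges is the quantity that the 4m + o(m)-bit structure stores.

```python
from collections import defaultdict

def de_bruijn_edges(seq, k):
    """Distinct k-mer edges of the de Bruijn graph of `seq`.
    Nodes are (k-1)-mers; each k-mer is an edge from its prefix to its suffix."""
    edges = set()
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer not in edges:
            edges.add(kmer)
            out_deg[kmer[:-1]] += 1   # prefix node gains an outgoing edge
            in_deg[kmer[1:]] += 1     # suffix node gains an incoming edge
    return edges, out_deg, in_deg

edges, out_deg, in_deg = de_bruijn_edges("TACGTACG", 3)
# m = len(edges) distinct edges; the succinct structure represents them in 4m + o(m) bits.
```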
Lambda-Calculus and Type Theory TLCA Open Problem 20
This paper answers TLCA Open Problem 20, which asks for a type system that characterizes hereditary permutators. First, the paper shows that no such type system exists, by showing that the set of hereditary permutators is not recursively enumerable. Second, it gives a best-possible solution by providing a countably infinite set of types such that a term has every type in the set if and only if the term is a hereditary permutator.
The Cluster Newton Method (CNM) has proved very efficient at finding multiple solutions to underdetermined inverse problems. In pharmacokinetics, underdetermined inverse problems are often given constraints to restrict the variety of solutions. In this presentation, we present an improvement to the CNM that uses the two parameters of the Beta distribution to find families of solutions instead of randomly spread-out solutions. This allows much greater control over the variety of solutions that can be obtained with the CNM, and facilitates the task of obtaining pharmacologically feasible parameters.
Philippe Gaudreau (MOU Internship Student, University of Alberta), Ken Hayami, Akihiko Konagaya (Professor, Department of Computational Intelligence and Systems Science, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology)
Least squares problems are fundamental problems arising in science, engineering, industry, and elsewhere. This study develops solvers well suited to large, ill-conditioned least squares problems. We show through computer experiments that the solvers give a solution to any least squares problem and are more powerful than previous solvers. Moreover, we present an application of the solvers to image reconstruction problems arising from electron microscopes in biology.
We propose a new exact distance querying method on large networks. Our method precomputes and stores pruned shortest-path trees to efficiently answer distance queries. Our experiments show that our method outperforms other state-of-the-art methods for various types of large-scale real-world networks.
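To make the idea of precomputed distance labels concrete, here is a minimal sketch (our own, with hypothetical names, not necessarily the authors' implementation) of a pruned-BFS 2-hop labeling scheme in this spirit: a BFS from each vertex records distances as labels, but stops expanding wherever the labels built so far already answer the query.

```python
from collections import deque

def pruned_landmark_labels(adj):
    """Build 2-hop distance labels by running a pruned BFS from each vertex
    of an unweighted graph given as an adjacency list."""
    n = len(adj)
    labels = [dict() for _ in range(n)]  # labels[v]: landmark -> distance

    def query(u, v):
        """Exact distance via common landmarks (inf if none)."""
        best = float('inf')
        for w, d in labels[u].items():
            if w in labels[v]:
                best = min(best, d + labels[v][w])
        return best

    for root in range(n):  # ideally ordered by importance (e.g., degree)
        dist = {root: 0}
        q = deque([root])
        while q:
            u = q.popleft()
            d = dist[u]
            if query(root, u) <= d:   # prune: already covered by earlier labels
                continue
            labels[u][root] = d
            for v in adj[u]:
                if v not in dist:
                    dist[v] = d + 1
                    q.append(v)
    return labels, query

adj = [[1], [0, 2], [1, 3], [2]]   # path graph 0-1-2-3
labels, query = pruned_landmark_labels(adj)
# query(0, 3) == 3
```

The pruning is what keeps the labels small on real-world networks: most BFS branches terminate early because earlier, more central landmarks already cover them.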
Applying Theory (Mathematics) to optimize difficult problems in the real world. The travelling tournament problem – Applications to Japanese Professional Baseball Scheduling
We apply mathematical tools to solve hard practical problems. For example, we try to create a distance-optimal schedule for the traveling tournament problem, i.e., each team plays every other team twice, once at home and once away.
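A small sketch (with names of our choosing, assuming a distance matrix between home cities) of the two ingredients of the problem: checking that a schedule is a double round-robin, and measuring the travel distance that is being minimized.

```python
from collections import Counter

def is_double_round_robin(matches, n):
    """Each ordered (home, away) pair of distinct teams appears exactly once."""
    c = Counter(matches)
    return all(c[(h, a)] == 1 for h in range(n) for a in range(n) if h != a)

def team_travel(team, venues, dist):
    """Distance travelled by `team` through `venues` (city indices; its own
    city for home games), starting from and returning to its home city."""
    total, cur = 0, team
    for v in venues + [team]:
        total += dist[cur][v]
        cur = v
    return total

# Tiny example with 3 teams and a symmetric distance matrix between home cities.
matches = [(h, a) for h in range(3) for a in range(3) if h != a]
ok = is_double_round_robin(matches, 3)
dist = [[0, 1, 2],
        [1, 0, 1],
        [2, 1, 0]]
d0 = team_travel(0, [1, 2], dist)   # away at 1, away at 2, then home: 1+1+2 = 4
```

The optimization problem is then to order the matches into rounds so that the sum of `team_travel` over all teams is minimal, which is what makes the travelling tournament problem hard.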
Adaptation in Computers Resilient Distributed Systems
Computing systems should be resilient in the sense that they are not only robust but also adaptive to changes in their execution environments, e.g., applications, network topologies, and devices. This work aims at proposing several approaches to enable software components running on computing systems, in particular distributed systems, to adapt to such changes in a self-organized manner, like a cellular differentiation mechanism.
Can computer reason about law? PROLEG: Implementation of Ultimate Fact Theory in Civil Litigation by Logic Programming
In this presentation, we show an implementation of the ultimate fact theory in civil litigation by logic programming. The ultimate fact theory is a decision tool for a judge acting under incomplete information; it attaches a burden of proof to each ultimate fact in the civil code. We show the correspondence between logic programming and the ultimate fact theory, and use it to implement the theory in logic programming.
How to use large amount of information with diversity? Integrating Various Information with Semantics
It is easy to obtain large amounts of diverse information nowadays. In order to use such information efficiently, we need to integrate it with semantics. In this presentation, we show semantic technologies for this problem.
After the 3.11 earthquake, many people realized the importance of building resilient systems that can absorb shocks from unexpected incidents. In our research, we set out to establish a challenging new research discipline that we call "systems resilience", which provides a set of unified design principles for building resilient systems.
Katsumi Inoue Tenda Okimoto, Hei Chan, Nicolas Schwind, Tony Ribeiro
Nowadays, people share masses of information on the Internet, and differences in language are becoming a more and more significant barrier. In particular, although vast numbers of people speak Chinese, English, or Japanese, no existing machine translation system for these languages can break the communication barriers among ordinary people. We are therefore focusing on research toward practical machine translation among these languages.
We aim to understand which document and personal characteristics influence reading behavior by analyzing people's eye movements. Using this information, we can discover which document characteristics cause unnecessary cognitive effort, allowing us to transform documents to increase readability and legibility. Our research also helps us gain cognitively motivated linguistic insight and refine user models that find applications in information recommendation systems.
Computer-Assisted Understanding of Mathematical Content Retrieval and Semantic Analysis of Mathematical Formulae
Mathematical expressions are an important means of scientific communication, used not only for numerical calculation and theorem proving but also for clarifying concept definitions and disambiguating formal operations. Based on this, our presentation introduces techniques to support the understanding and utilization of mathematical knowledge based on the analysis of mathematical formulae and their surrounding text.
Akiko Aizawa Goran Topic, Minh-Quoc Nghiem, Giovanni Yoko Kristianto
When people read a scientific paper, they not only see one word after another, but also think over the "content" represented in the paper by associating their own knowledge or other research with the content, which leads to their "deep understanding of the paper". It is, however, not easy for people to repeat such work for a huge number of papers with a great diversity of "content". We are currently developing fundamental technologies for assisting this "deep understanding of the content of a paper".
Research Center for Knowledge Media and Content Science
We will introduce the NII grand challenge project known as "Todai Robot Project." This project aims to add a new dimension to the current information technology and bring a deeper understanding of human intelligence, by setting a concrete goal: development of a computer which is able to pass university entrance exams. We will show major difficulties and challenges we are facing, and introduce several promising approaches.
Common Toolkit for University Entrance Examination Solvers Compatible Components of Question Answering System and Entrance Examination Solver/Scorer Workflow
The "Can Robots Enter the University of Tokyo" project aims to create artificial intelligence software that can automatically answer entrance examination questions. Creating such complex software from scratch requires a large amount of time. We aim to provide a common toolkit for building this software, saving developers' time and allowing them to concentrate on the tasks they are interested in.
How Effectively Computers Search Information NII Testbeds and Community for Information access Technologies (NTCIR)
NTCIR provides a large-scale, re-usable common research infrastructure for innovative challenges in information access technologies. Its purpose is to leverage research in information access and to create new future value by running a workshop in an 18-month cycle, which has attracted 100-130 research groups internationally. NTCIR-10, its latest round, tackled 8 innovative research tasks: cross-language link discovery; search results diversification, intention mining, and 1-click search in Web search; math retrieval; medical natural language processing; patent machine translation; inference in text and its challenge to university entrance exams (in collaboration with the Todai Robot Project); and spoken document retrieval.
Can light rays pass through walls and pillars with advanced image processing? Towards light field processing based on (de)composition of visual information
When we try to reuse inexpensive but narrow spaces for cultural activities such as plays, concerts, and movies, our view is often significantly blocked by pillars and walls. In the near future, however, Japanese cities will need to become much more compact by reusing such spaces effectively, because of our changing population composition. We introduce advanced image processing technologies for light field transmission beyond pillars and walls, which enable us to solve these visibility problems.
High-fidelity 3D modeling of real objects Photometric metric under unknown lighting for range image registration
We derive a new photometric metric for evaluating the correctness of a given rigid transformation aligning two overlapping range images captured under unknown, distant, general illumination. We estimate the surrounding illumination and the albedo values of points in the two range images from the point correspondences induced by the input transformation. We then synthesize the colors of both range images using albedo values transferred via the point correspondences and compute the photometric re-projection error. This approach allows us to accurately register two range images by finding the transformation that minimizes the photometric re-projection error.
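As an illustration of the idea (not the authors' exact formulation), a minimal sketch of a photometric re-projection error, under a simplified Lambertian, single-directional-light model, might look like this:

```python
import numpy as np

def photometric_error(albedo, normals, observed, light):
    """Sum of squared differences between colors synthesized from one range
    image's albedo and normals under an estimated directional light, and the
    colors observed at corresponding points of the other range image.
    Lambertian shading with a single light direction is an illustrative
    simplification of the paper's general-illumination model."""
    shading = np.clip(normals @ light, 0.0, None)   # (n,) Lambert cosine term
    synthesized = albedo * shading[:, None]         # (n, 3) per-point color
    return float(np.sum((synthesized - observed) ** 2))
```

Registration would then search for the rigid transformation whose induced correspondences minimize this error.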
Visual attention extracted from video with auditory information Incorporating auditory information to compute a visual saliency map for video
Current visual saliency maps, which represent visual attention of a human being, are computed from an image or a video using only image features. Our attention, however, is drawn by not only visual information but also auditory information. We introduce our approach to computing a visual saliency map that uses auditory information in addition to image information.
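As a toy illustration of the general idea, assuming a simple center-surround visual saliency and a per-frame audio energy signal (both placeholders, not the features used in this work), audio-modulated saliency could be sketched as:

```python
import numpy as np

def visual_saliency(frame):
    """Toy center-surround saliency: absolute difference between a pixel
    and the mean of its local (box-blurred) surround."""
    k = 9
    h, w = frame.shape
    padded = np.pad(frame, k // 2, mode="edge")
    surround = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            surround += padded[dy:dy + h, dx:dx + w]
    surround /= k * k
    return np.abs(frame - surround)

def audiovisual_saliency(frames, audio_energy):
    """Scale each frame's visual saliency by its normalized audio energy,
    so that louder moments receive stronger predicted attention."""
    e = np.asarray(audio_energy, dtype=float)
    e = (e - e.min()) / (np.ptp(e) + 1e-9)          # normalize to [0, 1]
    return [visual_saliency(f) * (0.5 + 0.5 * w)    # audio boosts saliency
            for f, w in zip(frames, e)]
```

In the actual model, the audio term would of course be derived from richer features than raw energy.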
When a human observer watches a video clip, where are they looking? Can we predict it? To what extent does it depend on the video, and to what extent on the viewer? We discuss our preliminary research on gaze prediction and analysis.
Efficient retrieval of similar data items A General Model of the Intrinsic Dimensionality of Data
We propose a framework for the characterization of data sets in data mining applications, in terms of their intrinsic dimensionality. Our model can be viewed as a generalization of the expansion dimension, which was originally proposed for the analysis of certain similarity search indices using the Euclidean distance metric. Here, we extend the original model to arbitrary distance distributions. We also provide a practical guide for estimating both local and global intrinsic dimensionality for certain distance metrics. The estimates of data complexity can subsequently be used in the design and analysis of efficient algorithms for data mining applications such as search, clustering, classification, and outlier detection.
Michael E. Houle Hisashi Kashima (U. Tokyo), Michael Nett (U. Tokyo, NII)
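One standard way to estimate such a local intrinsic dimensionality from neighbor distances is the maximum-likelihood (Hill-type) estimator; the sketch below is a generic illustration under that choice, not necessarily the exact estimator used in this work:

```python
import numpy as np

def local_intrinsic_dim(dists, k=100):
    """MLE (Hill-type) estimate of local intrinsic dimensionality from the
    k smallest positive distances between a query point and its neighbors.
    The choice of k and of this particular estimator are assumptions."""
    r = np.sort(np.asarray(dists, dtype=float))
    r = r[r > 0][:k]
    # ratio of each neighbor distance to the k-th neighbor distance
    return -1.0 / np.mean(np.log(r[:-1] / r[-1]))
```

For data that is locally uniform on a d-dimensional manifold, the estimate concentrates around d as k grows.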
Efficient retrieval of similar data items Multi-Step k-Nearest Neighbor Search Using Intrinsic Dimension
Most existing solutions for similarity search fail to handle queries with respect to high-dimensional or adaptable distance functions. For such situations, multi-step search approaches have been proposed, consisting of two stages: filtering and refinement. The filtering stage of the state-of-the-art multi-step search algorithm of Seidl and Kriegel is known to produce the minimum number of candidates needed to guarantee a correct query result; however, it may still produce an unacceptably large number of candidates. We present a heuristic multi-step search algorithm that utilizes a measure of intrinsic dimension, the (generalized) expansion dimension, as the basis of an early termination condition. Experimental results show that our heuristic approach obtains significant improvements while losing very little in the accuracy of the query results.
Michael E. Houle Xiguo Ma (NJIT), Michael Nett (U. Tokyo, NII), Vincent Oria (NJIT) Note: NJIT = New Jersey Institute of Technology
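A minimal sketch of the filter-and-refine idea, using the distance on a coordinate prefix as a cheap lower bound (an assumption made for illustration; the actual filter distance and the intrinsic-dimension-based termination heuristic differ):

```python
import heapq
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def lower_bound(a, b, m=2):
    """Distance on the first m coordinates lower-bounds the full
    Euclidean distance, so it is a valid filter distance."""
    return euclid(a[:m], b[:m])

def multistep_knn(query, data, k=2, m=2):
    """Seidl-Kriegel-style multi-step k-NN: scan candidates in increasing
    order of the cheap filter distance, refining with the exact distance,
    and stop once the filter bound exceeds the current k-th exact distance."""
    order = sorted(data, key=lambda p: lower_bound(query, p, m))
    result = []  # max-heap of (-exact_dist, point)
    for p in order:
        if len(result) == k and lower_bound(query, p, m) > -result[0][0]:
            break                     # no remaining candidate can improve
        d = euclid(query, p)
        if len(result) < k:
            heapq.heappush(result, (-d, p))
        elif d < -result[0][0]:
            heapq.heapreplace(result, (-d, p))
    return sorted((-nd, p) for nd, p in result)
```

The early break is exactly where the heuristic termination condition of the presented algorithm would plug in.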
Efficient retrieval of similar data items Rank-Based Similarity Search: Reducing the Dimensional Dependence
Virtually all known distance-based similarity search indices make use of some form of numerical constraints on similarity values for pruning and selection. The use of numerical constraints can lead to large variations in the numbers of objects examined in the execution of a query, making it difficult to control the execution costs. This presentation introduces a probabilistic data structure for similarity search, the rank cover tree (RCT), that entirely avoids the use of numerical constraints. The experimental results for the RCT, together with a probabilistic analysis, show that purely combinatorial methods for similarity search are capable of meeting or exceeding the level of performance of state-of-the-art methods that make use of numerical constraints on distance values.
Michael E. Houle Michael Nett (U. Tokyo, NII)
How can you get reader impressions intuitively while searching books? Color extraction method for creating a book cover image reflecting reader impressions
The image on a book cover gives potential buyers not only an impression of the book's contents but also a clue for search and browsing before or after buying the book. We propose using a color extraction method as the first step in automatically creating book cover images that reflect readers' impressions. We constructed a database expressing the relationships between adjectives and colors and extracted colors from text such as sentences in the book and user reviews.
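A toy sketch of the adjective-to-color step, with a hypothetical three-entry table standing in for the constructed adjective-color database:

```python
# Hypothetical adjective -> RGB table; the entries are illustrative
# assumptions, not values from the actual database.
ADJECTIVE_COLORS = {
    "warm": (220, 120, 60),
    "sad": (70, 90, 140),
    "fresh": (90, 180, 110),
}

def cover_color(review_words):
    """Average the colors of the adjectives found in review text to
    obtain one candidate color for the generated cover image."""
    hits = [ADJECTIVE_COLORS[w] for w in review_words if w in ADJECTIVE_COLORS]
    if not hits:
        return None
    n = len(hits)
    return tuple(sum(c[i] for c in hits) // n for i in range(3))
```

The real system extracts such adjectives from sentences in the book and from user reviews before mapping them to colors.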
Fluorescence: A Common Phenomenon Observed in Many Objects Modeling Reality based on Fluorescent Components
Fluorescence is a very common phenomenon observed in many objects, from natural gems and corals, to many kinds of paper we write on, and even our clothes. We show that the color appearance of such objects seen under different lighting can be represented as a linear combination of reflective and fluorescent components. The linear model enables us to effectively separate these two components using images taken under two different unknown illuminations. We also propose a novel technique called bispectral photometric stereo that makes an effective use of fluorescence for shape reconstruction.
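Under the simplifying assumption that the per-illumination coefficients are known scalars (the paper handles unknown illuminations), the per-pixel separation in the linear model reduces to a 2x2 linear solve:

```python
import numpy as np

def separate_components(img1, img2, l1, l2, c1, c2):
    """Per-pixel solve of I_k = R * l_k + F * c_k  (k = 1, 2) for the
    reflective part R and fluorescent part F. Treating the illumination
    coefficients (l_k for reflection, c_k for fluorescence excitation) as
    known scalars is a simplification of the paper's unknown-light setting."""
    A = np.array([[l1, c1], [l2, c2]], dtype=float)
    stacked = np.stack([img1.ravel(), img2.ravel()])   # shape (2, npix)
    R, F = np.linalg.solve(A, stacked)
    return R.reshape(img1.shape), F.reshape(img1.shape)
```

Given two images of the same scene under two sufficiently different lights, the reflective and fluorescent layers are recovered exactly whenever the 2x2 system is non-singular.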
How can we resolve revelations of sensitive information? A Method for Anonymizing Users' Sensitive Information and Detecting Revelations on Social Networking Sites
Sensitive information about a user can be disclosed through the user's social networking site (SNS). We have developed a method for creating anonymous fingerprints that not only anonymize a user's sensitive information but can also be used to identify a person who has disclosed sensitive information about the user. Moreover, in almost all cases a fingerprint cannot be converted by a discloser into one that causes the algorithm to incorrectly identify the person who disclosed the information. The algorithm was demonstrated in an application for controlling the revelation of sensitive information on Facebook.
Security and privacy are important issues in modern society, as exemplified by personal information leaks and attacks on systems in recent years. Compared to other types of products and infrastructure, the technologies for enhancing the security of information systems have not yet reached an adequate stage. This research aims to integrate security and privacy into software development methods to establish security software engineering technologies.
Disasters may destroy everything, including communications infrastructure, isolating people in disaster-stricken areas. Recovery of this infrastructure is often prolonged, which is unsuitable for fast disaster response. This work proposes practical deployments of on-site configured access networks for disaster recovery. Although infrastructure is certainly damaged right after a disaster occurs, battery-powered mobile devices (smartphones, laptops, tablet PCs) still work for some time. These mobile devices automatically switch from infrastructure mode into ad-hoc mode, establishing multihop access networks. These networks are extended until still-alive Internet gateways (IGWs) are reached, providing Internet access to the victims. The proposed scheme requires no equipment beyond commodity mobile devices, which are ubiquitously available.
Software-Defined Networking (SDN) is an emerging network architecture in which the control functions are decoupled from the forwarding and data processing elements. Moreover, SDN defines an open programmable interface between those elements, e.g., using the OpenFlow protocol. This programmability enables new data forwarding mechanisms that are both flexible and easy to deploy. This work presents an approach to leveraging SDN in disaster-resilient backbone networks. Both the advantages and disadvantages of SDN in the context of disasters are discussed, and potential solutions are proposed and extensively evaluated.
The "Hikari & Tsubasa's Information Three-Choice Question" series is a Flash-based interactive educational material for learning precise knowledge about information security. In this material, four university student characters talk with each other to find the correct answer. Two materials have been released so far. "The Information Security Three-Choice Class" helps you learn the university's security policy. In "The Information Survival Three-Choice Class", you experience IT volunteer work in which an exact answer is hard to find; with this material you can learn how to become resilient in the event of a huge disaster.
Social compatibility between privacy information and incentives Exchanging privacy information for service offers
Big data poses the new challenge of handling large datasets effectively. One type of valuable data is personal information, which is commonly used in advertising such as location-based and behavior-based targeted advertising. A new problem is privacy, because of the risks of disclosing personal information. Many people are threatened by privacy leaks through spam, scams, and crime. We propose to develop a new trading platform that exchanges personal information for service offers. This research aims to find the compatibility between privacy information and incentives, to present a new negotiation mechanism, and then to compare groups of users.
Accessing university services securely via the Internet Development of a trustworthy Certificate Issuance System optimized for universities
A server certificate is essential for authentication and encryption when providing various university services to faculty, staff, and students securely over the Internet. To obtain a server certificate, an applicant must pass several examinations by a Certification Authority. However, some of these examinations are inappropriate for universities because they generally assume a company as the applicant. We have developed a Certificate Issuance System that assures the same trustworthiness as commercial Certificate Authorities by optimizing such examinations for universities and automating some of them.
To protect the privacy of individuals, k-anonymity is a model widely used for privacy preservation when publishing micro-data. It reduces the confidence of linking sensitive information to a specific individual. However, a k-anonymized dataset loses accuracy due to information loss, and most existing k-anonymization approaches suffer from huge information loss. We propose a new model and a SpatialDistance (SD) heuristic algorithm based on distance calculations between tuples containing both numerical and categorical attributes, which is independent of attribute hierarchical taxonomies. Our extensive study shows that SD reduces information loss significantly in comparison with existing well-known algorithms.
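A taxonomy-free mixed-attribute distance of the kind such a heuristic builds on could be sketched as follows; the exact formula and weighting here are illustrative assumptions, not the paper's SD definition:

```python
def tuple_distance(t1, t2, numeric_ranges, categorical_idx):
    """Illustrative mixed-attribute distance: normalized absolute
    difference for numeric attributes, 0/1 mismatch for categorical
    ones, requiring no attribute hierarchy or taxonomy."""
    d = 0.0
    for i, (a, b) in enumerate(zip(t1, t2)):
        if i in categorical_idx:
            d += 0.0 if a == b else 1.0          # categorical mismatch
        else:
            lo, hi = numeric_ranges[i]
            d += abs(a - b) / (hi - lo) if hi > lo else 0.0
    return d
```

Clustering tuples by such a distance before generalization tends to group similar records, which is what keeps information loss low.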
Due to improvements in ICT and mobile technologies, hotel reservations for tourists and business travelers are made through online reservation systems. It is thus possible to capture the preferences and tendencies of bookers by monitoring web reservation systems. However, since it is unclear whether online reservation data alone accurately reflects reality, it is necessary to evaluate its accuracy. We compare online reservation data against actual booking data obtained from accommodation providers to predict and visualize booking prices and room availability.
The purpose of this work is to study different graph structures on Twitter. Most studies of Twitter's graph structure employ follower-followee relationships. However, these are rather static and do not provide as much information as other relationships such as retweeter-retweetee. Here, we go a step further and study the relationships between three graphs constructed by extracting 1) retweets, 2) mentions, and 3) replies. We show that there is a structural difference between the retweet graph and the mention/reply graphs. Finally, we exploit this difference to predict whether an account is verified by Twitter, with an F-measure of 0.853.
Software technologies are essential for innovation in the 21st century, and their faults have a large impact on our daily lives. This poster presentation illustrates how automated formal verification methods can provide a scientific basis for achieving the required reliability and safety levels of software-intensive systems.
Finding place names automatically from text GeoNLP: Software environment for the geo-tagging of natural language text
There is a great need for finding and mapping place names automatically from text, and this technology is especially powerful when a situation must be recognized quickly during a crisis. Building on geographic information systems (GIS) and natural language processing (NLP), we are developing geo-tagging software that annotates text with place tags, and establishing an infrastructure for toponym information systems with a portal site for sharing place names.
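At its simplest, geo-tagging amounts to matching text against a gazetteer; the sketch below illustrates only that core step (the sample entries are assumptions, and GeoNLP's actual matching and disambiguation are far richer):

```python
# Tiny sample gazetteer; real systems hold millions of entries.
GAZETTEER = {
    "Tokyo": (35.68, 139.77),
    "Yokohama": (35.44, 139.64),
}

def geo_tag(text):
    """Annotate text with (name, position, lat, lon) tuples for every
    gazetteer entry found, ordered by position of occurrence."""
    tags = []
    for name, (lat, lon) in GAZETTEER.items():
        pos = text.find(name)
        if pos >= 0:
            tags.append((name, pos, lat, lon))
    return sorted(tags, key=lambda t: t[1])
```

The hard part, which this sketch omits, is disambiguating place names that are also common words or that refer to multiple locations.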
Recognition of and communication about crises in society Crisis Informatics
How can informatics contribute during crises of society, such as natural disasters like typhoons and earthquakes, or man-made disasters like nuclear accidents? We investigate how big data should be used for the acquisition, analysis, communication, and presentation of crisis-related information.
World of international standards
International standardization of library RFID is now being developed under ISO. Based on experience with this standardization process, we analyze the technology, the structure of standardization, and the industry structure of stakeholders, and study the features and problems of international standardization processes.
Achieving between-person movement synchronization is crucial in music and sports performance as well as in daily cooperative activities, but the principle underlying it is not yet fully understood. If we can grasp its regularities, we should be able not only to apply the knowledge to education and to physical, occupational, speech, or psychological therapies, but also to design human-agent synchronization. In this presentation I introduce studies on human movement synchronization, including our own contributions to the topic.
A Cyber-Physical System (CPS), which integrates information from cyber and physical spaces, is expected to make human society more efficient. As a first step in our research on CPS, we use large-scale sensor data from cars for real-time automatic traffic incident detection. In this approach, we propose a novel feature based on "speed fluctuation" to detect traffic incidents with high precision.
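As an illustrative sketch of a speed-fluctuation feature (the window size and the use of a plain rolling standard deviation of speed changes are assumptions, not the proposed feature's exact definition):

```python
import statistics

def speed_fluctuation(speeds, window=5):
    """Rolling standard deviation of successive speed changes -- a simple
    proxy for 'speed fluctuation' computed from per-vehicle sensor data."""
    diffs = [b - a for a, b in zip(speeds, speeds[1:])]
    return [statistics.pstdev(diffs[i:i + window])
            for i in range(len(diffs) - window + 1)]

def detect_incident(speeds, threshold=8.0, window=5):
    """Flag a possible incident when fluctuation exceeds a threshold;
    the threshold value here is purely illustrative."""
    return any(f > threshold for f in speed_fluctuation(speeds, window))
```

Smooth traffic produces near-zero fluctuation, while repeated hard braking and re-acceleration around an incident produces a sharp spike.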
How do we promote eco-friendly behavior using SNSs? Cohesive relations and relaxed relations —SNS design for promoting eco-friendly behavior—
Nowadays, SNSs attract attention as an effective way to promote eco-friendly behavior. Nevertheless, participants sometimes drop out of such SNSs and find it difficult to keep up eco-friendly behavior. In this study, we propose an SNS design for promoting eco-friendly behavior that controls communication stress, i.e., the sense of obligation to communicate, within the SNS.
Based on social capital theory, we developed a smartphone application named "Network Navigator". We envision a society in which people communicate fruitfully with each other through information technology. Network Navigator collects logs of mobile phone calls, SMS, and Gmail using irreversible encryption, visualizes users' human relationships in terms of frequency of communication and strength of ties, and provides users with opportunities to improve their communication.
It is theoretically predicted that cooperation is promoted when reputations are effectively shared among groups or communities. However, it is not yet clear in what real-world contexts reputation sharing manifests its effect. Using a novel methodology that capitalizes on human communication data collected through smartphone logs, this study tackles this issue.
WebELS: Realizing a Globalization of Higher Education and Business by Cloud-based e-Communication Platform WebELS: Cloud-based e-Communication Platform
WebELS is a generalized cloud-based e-communication platform for "everywhere, anytime, everyone" use, supporting integrated e-Learning and e-Meeting globally.
Haruki Ueno Arjulie John Berena, Sila Chunwijitra, Mohamed Osamnia, Naonori Kato, Hitoshi Okada, Yoshihito Gotoh, Hideomi Koinuma
ANAQONDA Analogy Queries by Ontology-based Data Analytics
Despite the tremendous progress in Web-related technologies, interfaces for accessing the Web or large information systems have largely stayed at the level of keyword search and categorical browsing. In this project we explore analogy queries as one of the essential techniques required to bridge the gap between today's interfaces and future interaction paradigms. The intuitive concept of analogy is directly derived from human cognition and communication practices, and is in fact often considered the core concept of human cognition. In brief, analogies form abstract relationships between concepts, which can be used to efficiently express information needs or transmit even complex concepts, including important connotations, in a strictly human-centered and natural fashion. Building analogy-enabled information systems opens up a number of interesting scientific challenges: How does communication using analogies work? How can this process be modeled? How can information systems understand what a user-provided analogy actually means? How can analogies be discovered? This project aims to address these questions and develop suitable analogy-enabled prototype systems.
What kinds of emotions do online people express in earthquake situations? Twitter emotion analysis in earthquake situations
Social media is becoming a precious and important source of information where users often express their attitudes towards a problem of concern or a particular event. The task of determining these attitudes is called emotion analysis, an application of natural language processing, computational linguistics, and text analytics. Concretely, emotion analysis classifies users' emotions into types such as fear, surprise, relief, and joy. Because emotions are expressed most clearly in crisis events like earthquakes, emotion analysis in earthquake situations allows authorities and social managers to understand the attitudes and worries of affected people.
The Science Information NETwork (SINET) is an information and communication network connecting universities and research institutions throughout Japan. SINET4 commenced operation in April 2011, providing higher network speeds, diverse services, greater edge-node stability, enhanced access lines, and upper-layer deployment. The "SINET Promotion Office" continues to promote the use of these services, as it did last year.
Academic Infrastructure Div., Cyber Science Infrastructure Development Dept.
GakuNin realizes collaborative research environment beyond the barrier between different organizations Development of nationwide collaboration environment by GakuNin
The Academic Access Management Federation in Japan (GakuNin) is a system that, through ties to university authentication infrastructure, provides one-stop authentication for intra-school services as well as for affiliated universities, external academic cloud services, and commercial electronic journals. Through GakuNin, users can access all academic resources on the network with a single account. In this presentation we introduce the system that manages various groups of GakuNin users across organizational boundaries. We also present several examples of associated services in production for collaborative research activity.
Academic Infrastructure Div., Cyber Science Infrastructure Development Dept.
Enabling a wide range of users to easily utilize distributed supercomputers including "K Computer" Authentication System for Convenient, Reliable and Secure Access to Distributed Supercomputers (HPCI)
The High Performance Computing Infrastructure (HPCI) aims to build a computational environment that meets the needs of various users in academia and industry by federating the K computer in Kobe as a core system with supercomputers at universities and research institutes throughout Japan. NII operates the authentication system, including the certificate authority, in HPCI. The authentication system enables single sign-on to computing and storage resources using digital certificates, so users can access the resources in a secure and convenient way. Additionally, SINET4, operated by NII, provides the network infrastructure in HPCI for using remote supercomputers and sharing large experimental datasets.
Academic Infrastructure Div., Cyber Science Infrastructure Development Dept.
The National Institute of Informatics (NII), in close collaboration with universities, is attempting to generate and secure content that is indispensable to the academic community, and to build an information infrastructure that adds value to and broadly disseminates this content. Specifically, NII provides comprehensive academic content services, including GeNii (NII Scholarly and Academic Information Portal) and NACSIS-CAT/ILL (Catalog Information Service: Cataloging System / Interlibrary Loan System). NII also supports the construction of institutional repositories that collect, preserve, and disseminate research produced at universities.
Content system, Development Office, Scholarly and Academic Information Div., Cyber Science Infrastructure Development Dept.
NII has established the Department of Informatics in the School of Multidisciplinary Sciences at the Graduate University for Advanced Studies (SOKENDAI), offering both five-year and three-year doctoral programs. These two courses make the best use of NII's strengths as a pioneering, international research institution of informatics, and aim to foster the excellent talent who will lead the "knowledge society" of the 21st century. Located in the center of Tokyo, NII is easy for busy working students to reach for study and research. More than 70 students are registered; about half of them are international students, and 30% are working students. We present an outline of the Department of Informatics and the entrance exams for October 2013 and April 2014.
Graduate University for Advanced Studies (SOKENDAI)
The residential informatics seminars held in the small town of Dagstuhl, in the southwest of Germany, offer researchers a place to exchange ideas and discuss the issues they are currently working on, playing an important role in the promotion of the informatics field. February 2011 marked the first "NII Shonan Meeting", modeled on the Dagstuhl seminars, and a total of 20 seminars have been held. Through these seminars, we aim to make Japan a center of informatics in Asia.
National Archives of Japan, Digital Archive: The Past as a Prologue to the Future
Launched in 2005, the National Archives of Japan (NAJ) Digital Archive provides its catalogue database in a manner that it is connected to high-definition images of its various holdings such as the Constitution, old large maps and scrolls. Outlines of the Digital Archive will be demonstrated together with that of the Japan Center for Asian Historical Records (JACAR), a leading pioneer in this field.
The Tokyo Association of Dealers in Old Books launched an antiquarian book database in 1998, and it has been appreciated ever since by researchers and book lovers nationwide. The burning issue now is how antiquarian bookshops with rich philological knowledge can cooperate with the younger generation, which can make full use of computers. "Nihon-No-Furuhon-Ya" is now under development.
The Research Organization of Information and Systems establishes and operates a core research institute for promoting integrated research on a global level in the areas of polar sciences, informatics, statistical mathematics, and genetics, in collaboration with the research communities at universities and other organizations all over Japan. The Organization also aims to conduct integrated research across disciplines by addressing, from the perspectives of information and systems, issues involving complex phenomena of life, Earth, the natural environment, human society, and other areas, as critical issues for the 21st century.
Cultural information in various fields is now being digitally archived. We have developed a system that connects this information in an integrated manner to expand its range of utilization. In cooperation with the NHK Broadcasting Culture Research Institute, we are developing a "Broadcasting Culture Archive service". This service includes a system for viewing testimony together with relevant information, and a chronology system that organizes matters related to broadcasting by theme and period. In cooperation with cultural facilities in the region, we are developing a system for viewing old photographs and old maps in association with spatio-temporal information, looking back on the life of the community. We will demonstrate these at the open house.