Grants-in-Aid for Scientific Research (Kakenhi)
Venturing into a wide range of basic to applied research
Kakenhi are funds that provide broad support for scientific research based on the free ideas of the researchers themselves, covering a wide range of academic studies from basic to applied research. NII faculty members and researchers actively apply for Kakenhi grants, and many applications are approved. Grants obtained through Kakenhi are also distributed to researchers at other institutions (co-investigators) for collaborative research work.
Similarly, many NII faculty members also participate as co-investigators in the Kakenhi-funded projects of researchers at other institutions.
Applications Accepted (FY2024)

| | No. of applications accepted | Amount (in thousands of yen) |
| --- | --- | --- |
| Project Leader (Principal Investigator) | 57 | 384,752 |
| Co-investigator (Other institutions → NII) | 64 | 57,994 |
[Model Cases of Research Funded by Kakenhi]
Cyber Vaccine: Core Technologies to Counter Cyber Threats with Generative AI
Grant-in-Aid for Scientific Research (A)
Advances in AI technology and enhanced computing resources have made it technically possible to generate fake videos, audio, and other fake media indistinguishable from the real thing by training AI models on large amounts of information originating from humans, such as faces, bodies, and voices; this poses a global threat. The principal investigator was the first in the world to propose a method for detecting fake facial images and put this method into practical use.
However, in putting this method into practice, he realized that there were fundamental challenges beyond determining whether a video is genuine: providing historical information, such as which parts have been falsified and what the original looked like before falsification, and preventing unwanted automated collection and analysis of media by third parties so that it cannot be used to train AI. Accordingly, this research proposes a group of technologies called cyber vaccines that resolve these issues by pre-processing image, video, and audio media while maintaining media quality. It will develop an original restoration-type (R-type) vaccine capable of restoring the original media even if it has been falsified with AI, and an indecipherable-type (I-type) vaccine that makes AI analysis itself impossible, thereby establishing foundational technologies to counter cyber threats from generative AI.

Experimentally Revealing Human Over/Under-trust in AI and Development of AI to Prevent It
Grant-in-Aid for Scientific Research (A)
Principal Investigator: YAMADA, Seiji, Professor, Digital Content and Media Sciences Research Division
With human-AI collaborative decision-making becoming commonplace through the spread of ChatGPT and autonomous driving, new issues have emerged: over-trust, where humans place too much trust in AI, and conversely under-trust, where humans excessively distrust AI. This research focuses on human under-trust in AI, aiming to reveal its mechanism through cognitive modeling and to build AI that can prevent it. The proposed under-trust prediction model, represented as a graph of factors and their causal relationships, predicts the occurrence of under-trust and can be designed by designers in a top-down manner. The project will also propose Pot-AI, an AI system that prevents under-trust based on this model. Pot-AI predicts the occurrence of under-trust and mitigates it in advance by presenting stimuli, or preventive cues, to users when necessary. In designing preventive cues, nudge techniques that encourage behavioral change while preserving human autonomy are used. Pot-AI will then be implemented in Level 3 autonomous vehicles such as automobiles and drones, where real-time avoidance of under-trust is essential, and the effectiveness of the under-trust prediction model and Pot-AI will be verified through participant experiments.

Development of Reliable 3D Sensing Technology Based on Multi-view Learning of Redundant Observations
Grant-in-Aid for Scientific Research (B)
In recent years, generative AI has had a major impact on the field of 3D measurement, making it possible to reconstruct the complete 3D shape of a subject from a single image and to generate arbitrary viewpoints. However, 3D information produced by generative AI from limited observations is not always physically accurate and often contains information fabricated by the AI. It is therefore unsuitable for industrial measurement, which requires resolutions of millimeters or finer. Accordingly, this research aims to minimize such fabrication and realize highly reliable, accurate 3D measurement by combining multiple redundant observations of the same subject, such as images at different wavelengths, or light and sound. Combining information from different modalities is not easy; we have achieved this by using multi-view learning that leverages the knowledge of large-scale models such as generative AI.
Autonomous Control and Integrity Assurance for Large-scale Distributed Systems
Grant-in-Aid for Scientific Research (B)
To realize next-generation wireless communication systems, it is necessary to equip the network with advanced intelligence and autonomy, enabling self-organization, self-management, and self-optimization capabilities.
Building on previous efforts in resource control for wireless communication systems using machine learning, this study aims to address new challenges in preparation for practical deployment. We will construct a framework for distributed learning and autonomous control mechanisms in wireless communication systems that fully leverage the characteristics of autonomous distributed systems while avoiding potential safety risks. To achieve this, we will conduct research on preserving the integrity of distributed learning systems, autonomous resource allocation using multi-agent learning, and ensuring fairness in autonomous control systems. Through these efforts, we aim to improve wireless resource utilization efficiency, enable dynamic access control, and guarantee the integrity and fairness of learning and control systems.
Scalable Automated Program Verification for Concurrent and Parallel Programs
Grant-in-Aid for Scientific Research (A)
In recent software systems that must process huge amounts of data and communications, it is important to perform concurrent and parallel processing appropriately. However, because concurrency and parallelism exponentially increase the number of possible execution sequences and states of a system, building a correctly functioning concurrent or parallel software system is much more difficult than developing software that is only executed sequentially. This research aims to develop program verification techniques capable of addressing detailed program properties, especially temporal and state-dependent properties, that are crucial for ensuring the correctness of concurrent and parallel software systems. Our approach is based on type systems, which enable scalable verification by reducing the correctness of an entire software system to that of its individual components.
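The combinatorial growth mentioned above can be made concrete with a standard counting fact (an illustration, not part of the project's method): two threads executing m and n atomic steps can interleave in C(m+n, m) distinct orders, so the number of schedules a verifier must account for explodes even for small programs.

```python
from math import comb

def interleavings(m, n):
    """Number of distinct interleavings of two threads executing
    m and n atomic steps respectively: C(m+n, m)."""
    return comb(m + n, m)

# The schedule space blows up combinatorially as threads grow:
print(interleavings(2, 2))    # 6
print(interleavings(10, 10))  # 184756
```

This is why exhaustive exploration does not scale, and why compositional techniques such as type systems, which check each component once rather than every schedule, are attractive.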
Development of Average-case NP-completeness Theory Based on Meta-complexity
Grant-in-Aid for Challenging Research (Pioneering)
Principal Investigator: HIRAHARA, Shuichi, Associate Professor, Principles of Informatics Research Division
The theory of NP-completeness, established through the contributions of Cook, Levin, and Karp, provides evidence of computational difficulty for problems of practical importance such as the Traveling Salesman Problem. However, NP-completeness is formulated in terms of worst-case computational complexity, so the existing theory does not explain whether NP-complete problems can be solved in practice. A more realistic notion is average-case complexity, which measures the typical behavior of an algorithm when its inputs are drawn from a suitable distribution. The ultimate goal of this research is to establish a theory of NP-completeness in terms of average-case complexity, thereby revolutionizing the existing theory and developing one that is more realistic. Specifically, for natural and theoretically important average-case problems such as the planted clique problem, the aim is to provide evidence of their average-case computational difficulty by showing that they are as difficult as (complete for) other natural problems.
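To make the "suitable distribution" concrete, the planted clique distribution mentioned above can be sampled directly (a sketch for exposition only, not part of the project itself): draw an Erdős–Rényi graph G(n, 1/2), then add all edges among a randomly chosen set of k vertices. The average-case question is how hard it is to find that hidden clique.

```python
import random
import itertools

def planted_clique(n, k, seed=0):
    """Sample G(n, 1/2), then plant a k-clique on a random vertex subset.
    Returns the edge set and the planted clique's vertices."""
    rng = random.Random(seed)
    # Each of the C(n, 2) possible edges appears independently with prob 1/2.
    edges = {frozenset(e) for e in itertools.combinations(range(n), 2)
             if rng.random() < 0.5}
    clique = rng.sample(range(n), k)
    # Force every pair inside the chosen subset to be connected.
    edges |= {frozenset(e) for e in itertools.combinations(clique, 2)}
    return edges, set(clique)

edges, clique = planted_clique(n=30, k=6)
# By construction, the planted vertices form a clique.
assert all(frozenset(p) in edges for p in itertools.combinations(clique, 2))
```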
Building a Distributed Knowledge Base through Hierarchical Alignment of Documents
Grant-in-Aid for Scientific Research (B)
Understanding a document involves linking its content to other information. For example, when researchers read a paper, they compare its content with other papers and systematically organize the compared details to obtain practically useful knowledge, such as details of approaches, experimental conditions, evaluation methods, and issues that have not yet been addressed. In reality, however, different documents often refer to the same objects or concepts using different expressions, so accurately matching knowledge across documents remains a challenge. This research proposes an alignment method that associates information generated by different authors in different contexts, and will verify its effectiveness in multi-document summarization and question answering. It also aims to apply the proposed method to a large-scale corpus of papers and embed the acquired knowledge into large language models, thereby building a knowledge base that assists users in understanding and utilizing documents.
Desensitization of Algorithms for Decision Making and Knowledge Discovery
Grant-in-Aid for Scientific Research (B)
Principal Investigator: YOSHIDA, Yuichi, Professor, Principles of Informatics Research Division
Data-driven approaches that derive results from large amounts of data are becoming common in decision-making and knowledge discovery, and algorithms that perform such tasks are widely used. However, the data received is not always "correct": during collection, some values may be missing, contaminated with noise, or change over time. Under such circumstances, safety, efficiency, and reproducibility can be compromised if the algorithm is highly sensitive, that is, if its output changes significantly in response to minor input changes. To overcome this, the principal investigator proposed in 2021 the study of algorithms from the viewpoint of sensitivity. This research builds algorithms that are theoretically desensitized for problems in decision-making and knowledge discovery, and demonstrates their usefulness.
