This lab, founded in April 2020, is a research unit for computer vision and machine learning.  We conduct research on a range of topics aimed at replacing human visual sensing with machines.  Target applications include, but are not limited to, sensing algorithms for autonomous driving and advanced driver assistance systems, anomaly detection, image processing, and multimedia signal processing.  We also work on more fundamental topics that support broader applications.  Introduction slides:  [Sato Lab Intro] [Research Theme]

This laboratory was founded in April 2020 as a computer vision and machine learning research group. Our goal is to develop high-accuracy recognition technologies that can substitute for human visual perception. On the application side, we mainly work on recognition from sensor data for autonomous driving and advanced driver assistance systems, and also cover anomaly detection and image processing in general. At the same time, we pursue more fundamental research that opens up a wider range of applications. Introduction slides: [佐藤研の紹介] [研究テーマ]

Co-Adaptation Breaking for Generic Feature Extraction


Generally speaking, humans are amazingly good at recognizing their surroundings, which are often full of objects, people, structures, and so on. For instance, to replace the human driver with a machine, a number of recognition tasks, such as pedestrian detection, vehicle detection, drivable area recognition, roadway recognition, lane marking detection, traffic sign recognition, and traffic light recognition, must be processed in real time. Extracting generic features from sensor input that serve these various recognition tasks is regarded as one of the core technologies for autonomous driving. However, a phenomenon known as co-adaptation among neurons often makes the feature distribution excessively complex, so that only very specific data can be handled. We develop new optimization methods that avoid such undesirable situations and generate well-generalized features. [slides-1] [slides-2]

Reducing human labor in mobility requires making a wide variety of external-environment recognition problems tractable. For example, replacing human driving behavior with a machine demands real-time processing of many recognition tasks, including pedestrian recognition, vehicle recognition, drivable area recognition, lane marking recognition, traffic light recognition, and traffic sign recognition. Extracting features from sensor input that are generic across these recognition tasks is a core technology for such composite perception systems. In deep learning, however, a phenomenon known as co-adaptation among neurons is known to make the feature distribution excessively complex. We develop optimization methods that suppress this phenomenon and generate better-generalized features. [slides-1] [slides-2]
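Dropout (Srivastava et al.) is the best-known example of a co-adaptation-breaking technique: randomly zeroing units during training prevents any neuron from relying on the presence of specific partners. This is a minimal NumPy sketch of that idea, not the lab's own optimization method:

```python
import numpy as np

def dropout(x, rate=0.5, rng=None, train=True):
    """Inverted dropout: randomly zero activations during training.

    Because the surviving units change every step, no unit can
    co-adapt to the presence of any particular other unit.
    """
    if not train or rate == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    # Scale by 1/(1-rate) so the expected activation is unchanged.
    return x * mask / (1.0 - rate)

activations = np.ones((4, 8))
out = dropout(activations, rate=0.5)
```

At inference time (`train=False`) the layer is an identity, so no rescaling is needed thanks to the inverted scaling applied during training.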

Retrieval Based on Semantic Distance


Here is a pen. You see it and immediately understand that it is a pen; you never consider the possibility that it might be a pickup truck or an Indian rhino. Current state-of-the-art image classifiers, however, do consider such possibilities and return the most likely one. Humans instead measure a kind of distance between what is seen and instances seen before. Developing a technique for estimating the semantic distance between data samples would not only contribute to industry but also push AI to the next stage: it would enable estimation of data rareness and classification of data under class indeterminacy. We study ways of mapping data into a semantic space in a computationally efficient fashion.

Techniques for estimating the semantic distance between data samples would enable rareness evaluation of data samples, verification of dataset coverage, and classification of data samples under class indeterminacy. Beyond its contribution to mobility, such a technique has a fundamental significance that deserves to be called the next breakthrough in AI. We study methods for mapping into a semantic distance space and for reducing the computational cost of such mappings.
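As a hypothetical illustration (not the lab's actual mapping), retrieval by semantic distance is often approximated by embedding samples into a vector space and ranking by a distance such as cosine distance:

```python
import numpy as np

def semantic_distance(a, b):
    """Cosine distance between two embedding vectors (0 = same direction)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def retrieve(query, gallery):
    """Return gallery indices sorted from semantically nearest to farthest."""
    d = [semantic_distance(query, g) for g in gallery]
    return np.argsort(d)

# Toy embeddings: the last gallery item points the same way as the query.
order = retrieve([1.0, 0.0], [[0.0, 1.0], [1.0, 0.1], [1.0, 0.0]])
```

The research questions described above then become how to learn an embedding in which this distance reflects meaning, and how to compute it cheaply at scale.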

Inheritance of Generalization Capabilities between Machine Learning Models


Inference on edge devices must achieve high generalization capability and real-time processing under limited power consumption. Large-scale deep neural network models consisting of many layers show high generalization capability, but their computational load is often too large to embed in an edge device. We study methods for model compactification, such as knowledge distillation from large-scale models. [slides]
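Knowledge distillation in its classic form (Hinton et al.) trains a small student to match the temperature-softened output distribution of a large teacher; a minimal sketch of that loss, offered as background rather than as the lab's specific method:

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(z, float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2.

    A high temperature exposes the teacher's 'dark knowledge' (relative
    probabilities of wrong classes), which the student then inherits.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

The `T * T` factor keeps gradient magnitudes comparable when the temperature is changed; in practice this term is usually combined with an ordinary cross-entropy loss on the hard labels.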


Analysis of Learning Processes in Deep Learning


It is known that the performance of a deep neural network model depends heavily on configuration settings called hyperparameters. Hyperparameters are usually selected heuristically, and their relation to generalization ability has not been fully elucidated. In recent years, attempts to describe the learning process of deep learning as thermodynamic motion have led to a better understanding of the mechanism of generalization. We collect and analyze various statistics of the learning process through large-scale computation to deepen this understanding.
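The kind of per-step statistics mentioned above (loss, gradient norm, weight norm) can be illustrated on a toy problem; this is a hypothetical miniature of such instrumentation, using plain SGD on linear regression rather than any model the lab actually studies:

```python
import numpy as np

# Toy linear regression trained with mini-batch SGD; at each step we
# record statistics of the kind analyzed at scale in learning-process studies.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y = X @ w_true + 0.1 * rng.normal(size=256)

w = np.zeros(8)
lr = 0.05            # learning rate: one of the hyperparameters in question
batch = 32           # batch size: controls the gradient-noise "temperature"
stats = []
for step in range(200):
    idx = rng.integers(0, 256, size=batch)
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch   # mini-batch gradient
    w -= lr * grad
    stats.append({"step": step,
                  "loss": float(np.mean((X @ w - y) ** 2)),
                  "grad_norm": float(np.linalg.norm(grad)),
                  "w_norm": float(np.linalg.norm(w))})
```

In the thermodynamic picture, the ratio of learning rate to batch size sets the magnitude of the gradient noise, which is why traces like `grad_norm` over training are of interest.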


Theory of Model Mixture


Studies of the learning process have shown that a single deep neural network model potentially carries a certain inference bias. Mixing models is effective in correcting this bias and has been used in many situations, but the methodology remains heuristic. Our research focuses on establishing a basic understanding and methodology of model mixture. [poster]
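The simplest form of model mixture is averaging the class-probability outputs of several independently trained models; this sketch shows that baseline (the function name and weighting scheme are illustrative, not the lab's formulation):

```python
import numpy as np

def mix_models(prob_list, weights=None):
    """Weighted average of class-probability predictions from several models.

    Averaging reduces the variance component of the error, which is one
    way the inference bias of any single model can be partially corrected.
    """
    probs = np.asarray(prob_list, float)          # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    mixed = np.tensordot(weights, probs, axes=1)  # (n_samples, n_classes)
    # Renormalize defensively in case the weights do not sum to one.
    return mixed / mixed.sum(axis=-1, keepdims=True)

# Two models, one sample, two classes: the mixture splits the difference.
mixed = mix_models([[[0.9, 0.1]], [[0.5, 0.5]]])
```

Establishing when and why such mixtures help, and how to choose the weights non-heuristically, is exactly the open methodological question described above.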