
Event News

Talk by Prof. Ming-Hsuan Yang on Learning to Synthesize Image and Video Contents

Title:

Learning to Synthesize Image and Video Contents

Abstract:

In this talk, I will first review our recent work on synthesizing image and video content. The underlying theme is to exploit different priors to synthesize diverse content with robust formulations. I will then present our recent work on image synthesis, video synthesis, and frame interpolation.
I will also present our recent work on learning to synthesize images with limited training data. When time allows, I will discuss recent findings on other vision tasks.

Speaker Bio:

Prof. Ming-Hsuan Yang
University of California, Merced, USA

Ming-Hsuan Yang is a professor at UC Merced and a research scientist with Google. He received a Google Faculty Award in 2009 and the Faculty Early Career Development (CAREER) Award from the National Science Foundation in 2012.
He received paper awards at UIST 2017, CVPR 2018, and ACCV 2018, and served as a program co-chair for ACCV 2016 and ICCV 2019. He is a Fellow of the IEEE and the ACM.

Time/Date:

16:00-17:00 / Monday, December 12, 2022

Place:

Online
If you would like to join, please contact us by email.
Email : sugimoto [at] nii.ac.jp

Link:

SUGIMOTO Akihiro - Digital Content and Media Sciences Research Division - Faculty

