
Aug 05, 2018

Prof. Gunhee Kim (김건희)

Director of Vision and Learning Lab / Assistant Professor, Department of Computer Science and Engineering, Seoul National University

Research: Computer Vision, Machine Learning, Multimedia Data Mining, Storytelling with Big Visual Data.

 

Biography
Gunhee Kim has been an assistant professor in the Department of Computer Science and Engineering of Seoul National University since 2015. He was a postdoctoral researcher at Disney Research for one and a half years. He received his PhD in 2013 under the supervision of Eric P. Xing from the Computer Science Department of Carnegie Mellon University. Prior to starting his PhD study in 2009, he earned a master's degree under the supervision of Martial Hebert at the Robotics Institute, CMU. His research interests lie in solving computer vision and web mining problems that emerge from big image data shared online, by developing scalable and effective machine learning and optimization techniques. He is a recipient of the 2014 ACM SIGKDD Doctoral Dissertation Award and the 2015 Naver New Faculty Award.

Topic:   Memory Networks and Their Applications 

Abstract: Recently, neural memory networks have emerged as a powerful architectural paradigm that enables neural networks to store variables and data over long time scales. In this lecture, I will first introduce the fundamentals of memory networks, such as Seq2Seq models and attention mechanisms, and discuss some basic models, including Neural Turing Machines and end-to-end memory networks. Next, I will present several promising applications of memory networks in computer vision and natural language processing. Notable examples include video captioning and question answering, video summarization, generative modeling, and text summarization.
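For a concrete picture of the attention mechanism underlying these models, here is a minimal NumPy sketch of a soft memory read in the style of an end-to-end memory network; the names and dimensions are illustrative and not taken from the lecture slides.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(query, memory_keys, memory_values):
    # Soft attention read: score each memory slot against the query,
    # turn the scores into a distribution, and return the weighted sum.
    scores = memory_keys @ query        # (n,) similarity of query to each slot
    weights = softmax(scores)           # attention weights over the n slots
    return weights @ memory_values      # retrieved memory vector, shape (d,)

# Toy example: 5 memory slots with 4-dimensional embeddings.
np.random.seed(0)
query = np.random.randn(4)
keys = np.random.randn(5, 4)
values = np.random.randn(5, 4)
print(memory_read(query, keys, values))
```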

 

Lecture Slide:  part1  &  part2

Teaching Assistants: Leslie Shih (石孟立), Institute of Electrical Engineering, National Tsing Hua University (NTHU) / Tommy Liu (劉兆寧), Institute of Computer Science, NTHU

Hands-on Project Slide: link1  link2

Aug 06, 2018

Yu-Chiang Frank Wang, Ph.D.

Director of Vision and Learning Lab / Associate Professor, Department of Electrical Engineering, National Taiwan University

Research: Computer Vision, Machine Learning, Deep Learning, Artificial Intelligence

 

Biography 

Yu-Chiang Frank Wang received his PhD and MS degrees in ECE from Carnegie Mellon University in 2009 and 2004, respectively. He obtained his BS degree in EE from National Taiwan University in 2001. Before joining the Department of Electrical Engineering at National Taiwan University as an associate professor in 2017, Dr. Wang was with the Research Center for IT Innovation (CITI) at Academia Sinica as an Assistant/Associate Research Fellow from 2009 to 2017, where he also served as a Deputy Director from 2015 to 2017. Dr. Wang leads the Vision and Learning Lab at NTU, focusing on research topics in computer vision and machine learning, particularly the challenges and applications of transfer learning. Dr. Wang and his team received the First Place Award at Taiwan Tech Trek from the National Science Council (NSC) of Taiwan in 2011. In 2013 and 2017, he was selected as an Outstanding Young Researcher by NSC/MOST (Ministry of Science and Technology). His team also won the Second Place Award for research poster presentation at NVIDIA GTC Taiwan 2018.

Topic:    Deep Transfer Learning for Visual Analysis

Abstract: Aiming at bridging the semantic gap between data domains, transfer learning leverages information learned in one domain for another, and can be applied to a variety of real-world computer vision tasks. In this lecture, I will go over the background, settings, and fundamental techniques of transfer learning. State-of-the-art deep learning models for transfer learning will also be covered. I will discuss how to apply transfer learning techniques to visual analysis and synthesis tasks, including recent CVPR/ICCV/ECCV works.
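As a rough illustration of the most basic transfer learning recipe mentioned above (reusing features learned on a source domain and fine-tuning a new head on the target task), here is a minimal PyTorch sketch; the 10-class target task and the hyperparameters are hypothetical and not taken from the lecture.

```python
import torch
import torch.nn as nn
from torchvision import models

# Source domain: an ImageNet-pretrained backbone.
backbone = models.resnet18(pretrained=True)

# Freeze the pretrained features; replace the classifier head for the
# (hypothetical) 10-class target task.
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the new head is updated when fine-tuning on target-domain data.
optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```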

Lecture Slide: part1 part2 part3

Teaching Assistants: Yu-Jhe (Jack) Li (李宇哲), Graduate Institute of Communication Engineering, NTU / Shang-Fu (Sam) Chen (陳尚甫), Research Assistant, Graduate Institute of Communication Engineering, NTU

Hands-on Project Slide: link

Aug 07, 2018

Prof. Jia Deng

Director of Vision and Learning Lab / Assistant Professor, Computer Science and Engineering, University of Michigan

Research: Computer Vision and Machine Learning, in particular, achieving human-level visual understanding by integrating perception, cognition, and learning.

Biography

Jia Deng is an Assistant Professor of Computer Science and Engineering at the University of Michigan. His research focus is on computer vision and machine learning. He received his Ph.D. from Princeton University and his B.Eng. from Tsinghua University, both in computer science. He is a recipient of the PAMI Mark Everingham Prize, the Yahoo ACE Award, a Google Faculty Research Award, the ICCV Marr Prize, and the ECCV Best Paper Award.

Topic: Semantics beyond Objects

Abstract: To Be Updated

Teaching Assistants: Roy Tseng (曾守曜), Institute of Computer Science, NTHU / Gasoon Jia (賈松昊), College of Electrical Engineering and Computer Science, NTHU

Hands-on Project Slide: link

Colab: link1 link2

Aug 08, 2018

Prof. Joseph Lim

Director of Vision and Learning Lab / Assistant Professor, University of Southern California

Research: Computer Vision, Machine Learning, Graphics, and Robotics

 

Biography

I was a postdoctoral scholar at the Stanford Artificial Intelligence Laboratory with the Computer Vision group led by Professor Fei-Fei Li. Before that, I completed my PhD at the Massachusetts Institute of Technology under the guidance of Professor Antonio Torralba, and also did a half-year postdoc under Professor William Freeman. I received my bachelor's degree at the University of California, Berkeley, where I worked in the Computer Vision lab under the guidance of Professor Jitendra Malik. I have also spent time at Microsoft Research, Adobe Creative Technologies Lab, and Google.

Topic:   Vision for Interaction

Abstract: To Be Updated

Teaching Assistants: Johnson Wang (王尊玄), Institute of Electrical Engineering, NTHU / Tsu-Jui Fu (傅子睿), Research Assistant, Academia Sinica

Hands-on Project Slide: link

Colab: link

Aug 09, 2018

Prof. Alexander Schwing

Assistant Professor, Department of Electrical and Computer Engineering / Affiliate with Coordinated Science Laboratory, University of Illinois at Urbana-Champaign

Research: Machine Learning and Computer Vision, particularly algorithms for prediction with, and learning of, non-linear, multivariate, and structured distributions, and their application to numerous tasks, e.g., 3D scene understanding from a single image.

Biography:

Alex Schwing is an Assistant Professor at the University of Illinois at Urbana-Champaign, working with talented students on computer vision and machine learning topics. He received his B.S. and diploma in Electrical Engineering and Information Technology from the Technical University of Munich in 2006 and 2008, respectively, and obtained a PhD in Computer Science from ETH Zurich in 2014. Afterwards, he joined the University of Toronto as a postdoctoral fellow until 2016. His research interests are in the area of computer vision and machine learning, where he has co-authored numerous papers on topics in scene understanding, inference and learning algorithms, deep learning, image and language processing, and generative modeling. His PhD thesis was awarded an ETH medal. For additional information, please visit http://alexander-schwing.de/

Topic: Generative Models

Abstract: In this lecture, we will discuss generative modeling, covering the basics (k-means and Gaussian mixture models) as well as recent advances (Generative Adversarial Nets and Variational Auto-encoders). A basic understanding of linear algebra and probability is assumed.
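For readers who want a concrete starting point, below is a minimal PyTorch sketch of one training step of a Generative Adversarial Net, one of the recent advances the lecture covers; the tiny fully connected networks and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # toy dimensions for illustration

# Tiny fully connected generator and discriminator.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real_batch):
    n = real_batch.size(0)
    fake = G(torch.randn(n, latent_dim))

    # Discriminator: push real samples toward 1 and generated samples toward 0.
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + \
             bce(D(fake.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator output 1 for generated samples.
    g_loss = bce(D(fake), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```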

Lecture Slide: link

Teaching Assistants: Hubert Lin (林杰), Research Assistant, Department of Electrical Engineering, NTHU / Mike Chang (張嘉哲), NTHU Institute of Computer Science graduate and MediaTek intern

Hands-on Project Slide: link

Aug 05, 2018

Invited Talk: Chin-Wei Huang, 17:40-18:20

PhD student at MILA, University of Montreal (UdeM)

Biography

I am a PhD student at the Montreal Institute for Learning Algorithms (MILA), University of Montreal (UdeM), advised by Aaron Courville. My research mainly focuses on deep generative models with latent variables, approximate inference, and Bayesian deep learning. I also work part-time at Element AI with Alexandre Lacoste on transfer learning.

Topic: Autoregressive Flows for Image Generation and Density Estimation

Abstract: In this talk, I will introduce two families of generative models with tractable likelihood functions: autoregressive models and change-of-variable models, also known as normalizing flows. The two families of approaches have found applications in high-quality natural image generation and in density estimation (evaluation and detection). Specifically, the recently discovered connection between the two allows modelling high-dimensional data with theoretically guaranteed expressiveness and tractability.
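To make the change-of-variable idea concrete, here is a small NumPy/SciPy sketch of the exact log-likelihood log p(x) = log p_z(f(x)) + log |det df/dx| for an elementwise affine flow; a real autoregressive flow composes many such invertible transforms, and the parameters below are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

# Toy invertible transform z = f(x) = (x - mu) / sigma, applied elementwise.
mu, log_sigma = 0.5, np.log(2.0)

def log_prob(x):
    # Change of variables: log p(x) = log p_z(f(x)) + log |det df/dx|.
    sigma = np.exp(log_sigma)
    z = (x - mu) / sigma                    # map data to the base space
    log_base = norm.logpdf(z).sum(axis=-1)  # standard normal base density
    log_det = -log_sigma * x.shape[-1]      # df_i/dx_i = 1/sigma in every dimension
    return log_base + log_det

x = np.array([[0.1, -0.3, 2.0]])
print(log_prob(x))  # tractable exact log-likelihood, usable for density estimation
```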

Talk Slide: Link

Aug 05, 2018

Invited Talk: Prof. Torbjörn Nordling, Welcome Reception, 19:00-19:20

Assistant Professor, Department of Mechanical Engineering, National Cheng Kung University

Biography

Dr. Torbjörn Nordling obtained both his Ph.D. in Automatic Control (2013) and his M.Sc. in Engineering Physics (2005) from KTH Royal Institute of Technology in Stockholm, Sweden. He has specialised in mathematical modelling, system identification, and machine learning with applications in biology and medicine. He has developed both new theory and methodology, most notably for robust network inference and variable selection.

 

He has been an Assistant Professor at the Department of Mechanical Engineering at National Cheng Kung University in Taiwan since 2015. Previously, he did a postdoc at the Department of Immunology, Genetics and Pathology at Uppsala University in Sweden. He has been a visiting researcher at the Telethon Institute of Genetics and Medicine in Naples, Italy, and the ERATO Kitano Symbiotic Systems Project at the Japan Science and Technology Agency in Tokyo, Japan. He is the founder of Nordron AB, a startup specialised in data analysis, and a co-founder of Jagah Systems AB, an award-winning indoor geolocalisation startup. He has co-authored more than 10 peer-reviewed journal articles and given numerous oral and poster presentations. His research currently focuses on artificial intelligence, machine learning, and systems biology.

Topic:   Machine vs. Human 7-2

Abstract:

Artificial intelligence (AI), in particular deep learning (DL), has since the ImageNet LSVRC-2012 contest established itself as a core technology driving the 3rd industrial revolution, with many commercial applications. This rapid success of artificial narrow intelligence (ANI) is due to four factors: big labelled data, GPU-accelerated distributed computing, open-source software, and algorithms. Within the last five years, deep learning has enabled computers to go from worthless to superhuman performance on many problems, such as skin cancer diagnosis, lip reading, image recognition, and image description. In a light manner, I will present seven areas in which machines have recently beaten human performance.

 

However, computers are less power- and data-efficient. Training of artificial neural networks (ANNs) in general requires thousands, if not millions, of examples, while humans can learn from a single example.

Human-like unsupervised learning has been called the "holy grail" of AI research by Facebook's Yann LeCun. In my lab, we are currently building smart toys to collect longitudinal data on how children learn, with the aim of creating more data-efficient training methods. I am looking for collaborators and students interested in participating in our research.

 

Talk Slide: link

