Integrated Development of Computer Graphics and Vision
In today’s digital era, the integration of computer graphics and computer vision, as a key driver of scientific and technological progress, is attracting increasingly wide attention. From virtual reality to augmented reality, from film special effects to medical image processing, this integration has profoundly changed the way people live and work. This forum will explore the technologies and applications at the intersection of computer graphics and computer vision. The two fields are closely related, and their integration is producing new innovative technologies. The forum will examine how graphics and vision techniques can be combined to achieve deep integration of the virtual and real worlds: from smart glasses to interactive projection, visual fusion technology is shaping the future of human-computer interaction. The fusion of graphics and vision also has broad application prospects in fields such as medical imaging and industrial design. The forum will discuss how 3D visualization can assist doctors in surgical planning and simulation, and how virtual prototyping and digital twin technology can accelerate product innovation and development in industrial design. The forum will gather experts from academia to discuss cutting-edge developments in the integration of computer graphics and computer vision, as well as concrete application practices.
Sept. 22, 9:00–12:00
Biography：Weiwei Xu is a tenured professor at the State Key Laboratory of CAD&CG, School of Computer Science and Technology, Zhejiang University, and a Changjiang Scholar of the Ministry of Education. He was previously a postdoctoral fellow at Ritsumeikan University in Japan, a researcher in the Internet Graphics Group at Microsoft Research Asia, and a Zhejiang Qianjiang Scholar Distinguished Professor at Hangzhou Normal University. His main research direction is computer graphics, covering 3D reconstruction, deep learning, physical simulation, and 3D printing. He has published more than 80 papers in leading academic conferences and journals at home and abroad, including more than 40 CCF-A papers in venues such as ACM Transactions on Graphics, IEEE TVCG, IEEE CVPR, and AAAI, and holds 15 patents granted in China and the United States. The 3D registration and reconstruction technology he developed has been applied in high-precision scanners and human body 3D reconstruction systems. In 2014 he was funded by the National Science Fund for Distinguished Young Scholars; he has also led a key project of the National Natural Science Foundation of China and won the second prize of the Zhejiang Provincial Natural Science Award.
University of Science and Technology of China
Biography：Juyong Zhang is a professor at the School of Mathematical Sciences, University of Science and Technology of China. He is supported by the National Science Fund for Outstanding Young Scholars and was selected as an Excellent Member of the Youth Innovation Promotion Association of the Chinese Academy of Sciences. He graduated from the Department of Computer Science, University of Science and Technology of China, in 2006, and from Nanyang Technological University, Singapore, in 2011. From 2011 to 2012 he was a postdoctoral researcher at the Swiss Federal Institute of Technology in Lausanne. His research field is computer graphics and 3D vision. His recent interests include efficient, high-fidelity 3D digitization of the real physical world based on neural implicit representations, inverse rendering and numerical optimization methods, and the creation of highly realistic virtual digital content.
Lecture Title：High-fidelity 3D Digitization based on Neural Implicit Representation
Abstract：Efficient and high-precision 3D reconstruction of people, objects, and scenes in the real physical world is a core research problem in computer graphics, 3D vision, and related fields. Traditional 3D reconstruction pipelines usually involve multiple steps such as depth acquisition, point cloud registration, and mesh reconstruction; the cumbersome processing and hardware requirements have kept high-fidelity 3D reconstruction and presentation from becoming as widespread as 2D images. In recent years, neural implicit functions, represented by neural radiance fields (NeRF), have made great breakthroughs in novel view synthesis and high-precision 3D reconstruction thanks to their powerful fitting capability and differentiability. In this talk, I will introduce the concept of neural implicit representation, various improvements to it, and its applications in the reconstruction of digital humans, objects, and large scenes.
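As background for the abstract above, here is a minimal numerical sketch of the volume-rendering step that makes NeRF-style representations differentiable. The setup is illustrative only: a real NeRF obtains per-sample densities and colors by querying an MLP, whereas this toy example hard-codes them along a single camera ray.

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Composite per-sample densities and colors into one pixel color.

    densities: (N,) non-negative volume densities sigma_i along the ray
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,) spacing between adjacent samples
    """
    # Opacity of each sample from its density and local spacing.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Contribution weight of each sample; weights sum to at most 1.
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# Toy example: a dense red "surface" midway along the ray.
densities = np.array([0.0, 0.0, 50.0, 50.0, 0.0])
colors = np.tile(np.array([1.0, 0.0, 0.0]), (5, 1))
deltas = np.full(5, 0.1)
rgb, weights = volume_render(densities, colors, deltas)
# rgb is close to pure red; nearly all weight falls on the first dense sample.
```

Because every operation here is differentiable, gradients of an image loss can flow back through the weights to whatever network produced the densities and colors, which is the property the abstract highlights.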
National University of Defense Technology
Biography：Kai Xu is a professor at the National University of Defense Technology, funded by the National Science Fund for Distinguished Young Scholars, and has been a visiting scholar at Princeton University. His research directions include computer graphics, 3D vision, robot perception, and digital twins. He has published more than 80 CCF-A papers, including 29 at SIGGRAPH, the top conference in computer graphics. He serves on the editorial boards of top international journals such as ACM Transactions on Graphics, has served as papers co-chair of international conferences such as GMP 2023 and CAD/Graphics 2017, and has been a program committee member for conferences such as SIGGRAPH and Eurographics. He is deputy director of the 3D Vision Committee of the China Society of Image and Graphics, deputy director of the Geometric Design and Computing Committee of the China Society for Industrial and Applied Mathematics, and a director of the China Graphics Society. He has won two first prizes of the Hunan Provincial Natural Science Award (ranked 1st and 3rd, respectively), the first prize of the Natural Science Award of the China Computer Federation (ranked 3rd), the second prize of the Army Science and Technology Progress Award, and the second prize of the Army Teaching Achievement Award.
Lecture Title：Embodied Intelligence based on Three-dimensional Geometric Perception
Abstract：Visual perception is the most important way for robots to explore, perceive, and understand unknown environments. With the rapid development of 3D sensing technology, 3D graphics is being deeply integrated with robot vision, forming a new mode of robot perception and interaction based on 3D geometry. This enables robots to perceive and dexterously interact with unknown environments, and ultimately supports embodied intelligence for robots operating in 3D environments. Focusing on the three aspects of reconstruction, understanding, and interaction, this talk presents our work in recent years, including robust and scalable real-time 3D reconstruction, autonomous and cooperative scene scanning and reconstruction by robots, active scene understanding by robots, and robot dexterous grasping based on 3D geometric representation learning, and explores future directions for embodied intelligence based on 3D geometric perception.
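To give a concrete flavor of the real-time 3D reconstruction mentioned in the abstract, here is a hedged sketch of truncated signed distance field (TSDF) fusion, a common building block of such systems. The 1-D voxel column, function names, and values are illustrative, not the speaker's actual pipeline.

```python
import numpy as np

def fuse_depth(tsdf, weight, voxel_z, depth, trunc=0.1):
    """Integrate one depth observation into a 1-D column of voxels.

    tsdf, weight: running TSDF values and integration weights per voxel
    voxel_z:      depth of each voxel centre along the camera ray
    depth:        observed surface depth for this ray
    """
    sdf = depth - voxel_z                    # signed distance to the observed surface
    d = np.clip(sdf / trunc, -1.0, 1.0)      # truncate to [-1, 1]
    mask = sdf > -trunc                      # skip voxels far behind the surface
    new_w = weight + mask                    # bump weight where we observed
    # Weighted running average: noisy observations blend toward the true surface.
    tsdf = np.where(mask, (tsdf * weight + d) / np.maximum(new_w, 1), tsdf)
    return tsdf, new_w

voxel_z = np.linspace(0.0, 1.0, 11)          # voxel centres along one camera ray
tsdf = np.zeros(11)
weight = np.zeros(11)
for obs in (0.52, 0.48, 0.50):               # three noisy depth readings
    tsdf, weight = fuse_depth(tsdf, weight, voxel_z, obs)
# The zero crossing of `tsdf` now sits near depth 0.5, the averaged surface.
```

The surface is then extracted at the TSDF zero crossing (e.g. via marching cubes); the weighted averaging is what makes the fusion robust to sensor noise frame over frame.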
Biography：Dr. Jie Guo is an associate researcher in the Department of Computer Science and Technology at Nanjing University. He received his PhD from Nanjing University in 2013. His current research interests are mainly in computer graphics, virtual reality, and 3D vision. He has over 70 publications in internationally leading conferences (SIGGRAPH, SIGGRAPH Asia, CVPR, ICCV, ECCV, IEEE VR, etc.) and journals (ACM ToG, IEEE TVCG, IEEE TIP, etc.). He has developed several applications for illumination prediction, material prediction, and real-time rendering, which have been widely used in industry and have delivered significant economic and social benefits. He is the recipient of the JSCS Youth Science and Technology Award, the JSIE Excellent Young Engineer Award, the Huawei Spark Award, the 4D ShoeTech Young Scholar Award, and the Lu Zengyong CAD&CG High-Tech Award.
Lecture Title：Estimating Material Appearance from a Single Image
Abstract：Building a virtual world that is consistent with the real world has long been a goal of researchers in computer graphics, and material estimation techniques are an essential part of this process. In recent years, deep learning has emerged as an important foundational technology that has driven the development of material estimation and accelerated its practical application. This talk explores the material estimation problem in real-world scenarios and focuses on solving it in a lightweight setting that uses a single image as input.
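A common idea behind single-image material estimation, which the abstract alludes to, is a rendering loss: predicted material maps are judged by re-rendering them under the capture lighting and comparing the result with the input photo. The sketch below illustrates this with a deliberately simplified Lambertian model; all names and values are illustrative assumptions, not the speaker's method.

```python
import numpy as np

def render_lambertian(albedo, normals, light_dir):
    """Shade per-pixel albedo maps with a single directional light."""
    l = light_dir / np.linalg.norm(light_dir)
    n_dot_l = np.clip((normals * l).sum(axis=-1, keepdims=True), 0.0, None)
    return albedo * n_dot_l

def rendering_loss(pred_albedo, normals, light_dir, photo):
    """Mean squared difference between the re-rendered image and the photo."""
    rendered = render_lambertian(pred_albedo, normals, light_dir)
    return np.mean((rendered - photo) ** 2)

# Toy scene: a flat surface lit head-on, photographed with known lighting.
H, W = 4, 4
normals = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))
light = np.array([0.0, 0.0, 1.0])
true_albedo = np.full((H, W, 3), 0.6)
photo = render_lambertian(true_albedo, normals, light)

loss_true = rendering_loss(true_albedo, normals, light, photo)   # 0: correct material
loss_wrong = rendering_loss(np.full((H, W, 3), 0.3), normals, light, photo)
```

In a learning setting, `pred_albedo` (and, in practice, normal, roughness, and specular maps) would come from a network, and the differentiable rendering loss would supervise it without requiring ground-truth material maps.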