Towards Better Adversarial Synthesis of Human Images from Text


This paper proposes an approach that generates multiple 3D human meshes from text. The human shapes are represented as 3D meshes based on the SMPL model. Performance is evaluated on the COCO dataset, which contains challenging human shapes and intricate interactions between individuals. The model is able to capture, from text alone, the dynamics of the scene and the interactions it describes. We further show that using such shapes as input to image synthesis frameworks helps constrain the network to synthesize humans with realistic shapes.
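
To make the representation concrete, here is a toy stand-in for the SMPL parameterization the paper builds on: shape coefficients beta (10) and axis-angle pose theta (24 joints x 3) determine a mesh of 6890 vertices. The random template and blendshape matrices below are placeholders, not the real learned SMPL parameters, and only the linear shape blendshapes are applied.

```python
import numpy as np

# SMPL dimensions: 6890 vertices, 10 shape coefficients, 24 joints (axis-angle).
N_VERTS, N_BETAS, N_JOINTS = 6890, 10, 24

def smpl_stub(beta, theta, template, shape_dirs):
    """Toy stand-in for SMPL: template mesh plus linear shape blendshapes.
    The real model additionally applies pose blendshapes and skinning."""
    assert beta.shape == (N_BETAS,) and theta.shape == (N_JOINTS * 3,)
    return template + shape_dirs @ beta  # (6890, 3) vertex positions

rng = np.random.default_rng(0)
verts = smpl_stub(beta=rng.normal(size=N_BETAS),
                  theta=np.zeros(N_JOINTS * 3),          # neutral pose
                  template=rng.normal(size=(N_VERTS, 3)),
                  shape_dirs=rng.normal(size=(N_VERTS, 3, N_BETAS)))
print(verts.shape)  # (6890, 3)
```

The low-dimensional (beta, theta) parameterization is what makes SMPL a convenient generation target: a network only has to predict 82 numbers per person rather than a full mesh.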

Authors: R. Briq, P. Kochar, J. Gall

Download here

Adversarial Synthesis of Human Pose from Text

Published in German Conference on Pattern Recognition (GCPR) 2020

This work focuses on synthesizing human poses from text descriptions. We propose a model based on a conditional generative adversarial network, designed to generate 2D human poses conditioned on human-written text descriptions. The model is trained and evaluated on the COCO dataset, which consists of images capturing complex everyday scenes with various human poses. We show through qualitative and quantitative results that the model synthesizes plausible poses matching the given text, indicating that poses consistent with the given semantic features can be generated, especially for actions with distinctive poses.
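
The conditioning mechanism of such a generator can be sketched as follows: a text embedding is concatenated with a noise vector and mapped to 2D keypoint coordinates (COCO annotates 17 keypoints per person). The dimensions, the two-layer MLP, and the random weights are illustrative assumptions, not the paper's architecture; a real model would train these weights adversarially against a discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
TEXT_DIM, NOISE_DIM, HIDDEN, N_JOINTS = 32, 16, 64, 17

# Random weights stand in for adversarially trained parameters.
W1 = rng.normal(size=(TEXT_DIM + NOISE_DIM, HIDDEN)) * 0.1
W2 = rng.normal(size=(HIDDEN, N_JOINTS * 2)) * 0.1

def generator(text_emb, noise):
    """Map a text embedding plus a noise vector to 17 (x, y) joint positions.
    Different noise samples yield different poses for the same text."""
    h = np.tanh(np.concatenate([text_emb, noise]) @ W1)
    return (h @ W2).reshape(N_JOINTS, 2)

pose = generator(rng.normal(size=TEXT_DIM), rng.normal(size=NOISE_DIM))
print(pose.shape)  # (17, 2)
```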

Authors: Y. Zhang, R. Briq, J. Tanke, J. Gall

Download here

Unifying Part Detection and Association for Recurrent Multi-Person Pose Estimation


Current bottom-up approaches for 2D multi-person pose estimation (MPPE) detect joints collectively without distinguishing between individuals. Associating the joints with individuals is done independently of the learning algorithm, and therefore requires formulating a separate problem that is solved in a post-processing step relying on relaxations or sophisticated heuristics. We propose a differentiable learning-based model that performs part detection and association jointly, eliminating the need for further post-processing. The approach introduces a recurrent neural network (RNN) that takes dense low-level features as input, predicts the heatmaps of a single person's joints in each iteration, and refines them using a feedback loop. In addition, the network learns a stopping criterion in order to halt once it has identified all individuals in an image, allowing it to output any number of poses. Furthermore, we introduce an efficient implementation that allows training on memory-constrained machines. The approach is generic and can be combined with any bottom-up method. It is evaluated on the challenging MSCOCO and OCHuman datasets and achieves a substantial improvement over the baseline. On OCHuman, which contains severe occlusions, we achieve state-of-the-art results even compared to top-down approaches. Our results demonstrate the advantage of a learning-based detection and association framework, and of bottom-up approaches over top-down approaches in challenging scenarios.
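
The inference loop can be sketched as follows: one person's joint heatmaps are predicted per iteration, and the learned stopping criterion decides when everyone has been found. Here the RNN step is replaced by a toy stand-in with a hard-coded stopping rule (three people), purely to illustrate the control flow.

```python
import numpy as np

rng = np.random.default_rng(1)
N_JOINTS, H, W = 17, 64, 48

def predict_next_person(features, already_found):
    """Toy stand-in for the RNN step: returns one person's joint heatmaps
    and a stop score. Here the score simply grows with the number of people
    already output, so the loop halts after three."""
    heatmaps = rng.random((N_JOINTS, H, W))
    stop_score = len(already_found) / 3.0
    return heatmaps, stop_score

features = rng.random((256, H, W))  # dense low-level features from a backbone
poses = []
while True:
    heatmaps, stop = predict_next_person(features, poses)
    if stop >= 1.0:  # learned stopping criterion says "no one left"
        break
    # One flat heatmap argmax per joint, i.e. one pose per iteration.
    poses.append(heatmaps.reshape(N_JOINTS, -1).argmax(axis=1))

print(len(poses))  # 3
```

Because the loop terminates via a predicted score rather than a fixed count, the same network handles images with any number of people.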

Authors: R. Briq, A. Doering, J. Gall

Download here

Convolutional Simplex Projection Network for Weakly Supervised Semantic Segmentation (CSPN)

Published in British Machine Vision Conference (BMVC) 2018

Weakly supervised semantic segmentation has been a subject of increased interest due to the scarcity of fully annotated images. We introduce a new optimization approach for solving weakly supervised semantic segmentation with deep Convolutional Neural Networks (CNNs). The method introduces a novel layer that applies a simplex projection to the output of a neural network using area constraints on class objects. The proposed method is general and can be seamlessly integrated into any CNN architecture. Moreover, the projection layer allows strongly supervised models to be adapted to weakly supervised settings effortlessly by substituting the ground-truth labels. Our experiments show that applying such an operation to the output of a CNN substantially improves the accuracy of the baseline architecture and allows for faster convergence.
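
A standard way to realize such a layer is the sort-based Euclidean projection onto the scaled simplex {x : x >= 0, sum(x) = area}, so the projected scores respect a given class-area budget. The sketch below shows that classical projection on a toy score vector; it is an assumed building block, not the paper's full layer, which operates on CNN output maps.

```python
import numpy as np

def project_to_simplex(v, area):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = area},
    via the classical sort-and-threshold algorithm."""
    u = np.sort(v)[::-1]                      # scores in descending order
    css = np.cumsum(u)
    # Largest index whose entry stays positive after thresholding.
    rho = np.nonzero(u + (area - css) / np.arange(1, len(u) + 1) > 0)[0][-1]
    theta = (css[rho] - area) / (rho + 1)
    return np.maximum(v - theta, 0.0)

scores = np.array([0.9, 0.2, -0.1, 0.4])      # toy raw class scores
proj = project_to_simplex(scores, area=1.0)
print(round(proj.sum(), 6))                    # 1.0 -- area constraint holds
```

Since the projection is piecewise linear in its input, it can be backpropagated through, which is what lets it sit inside a CNN as a layer.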

Authors: R. Briq, M. Moeller, J. Gall

Download here

Online Robust Learning Using the Radon Point

MSc thesis, University of Bonn 2017

This thesis analyzes a novel approach for model synchronization in distributed online learning from noisy data streams. The proposed approach combines weak hypotheses that have been computed locally by replacing them with their Radon point (a generalization of the median to higher dimensions). These hypotheses may be learned by a wide range of online learning algorithms, making the approach black-box with respect to the underlying learner. The work encompasses both the theoretical and empirical aspects of the method. The theoretical analysis focuses on proving probabilistic guarantees on the error bound. We show that on noise-free streams, the approach satisfies strong probabilistic error guarantees within the framework of PAC (Probably Approximately Correct) learning. Additionally, under strict assumptions, it provides a method for converting regret bounds of standard online learning algorithms to PAC error bounds. The empirical part evaluates the approach in practice and shows that it outperforms state-of-the-art approaches on noisy data streams.
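
The Radon point itself can be computed directly: by Radon's theorem, any d+2 points in R^d admit a partition into two sets whose convex hulls intersect, and the intersection point is found from a null-space vector of a small linear system. The sketch below illustrates that computation on its own; how the thesis applies it to hypothesis vectors from distributed learners is not reproduced here.

```python
import numpy as np

def radon_point(points):
    """Radon point of d+2 points in R^d: a point lying in the convex hulls
    of both parts of a Radon partition of the points."""
    pts = np.asarray(points, dtype=float)
    r, d = pts.shape
    assert r == d + 2, "need exactly d+2 points in R^d"
    # A nontrivial lam with sum(lam_i * x_i) = 0 and sum(lam_i) = 0 exists
    # by Radon's theorem; take it from the null space of A via SVD.
    A = np.vstack([pts.T, np.ones(r)])   # (d+1) x r, rank <= d+1 < r
    lam = np.linalg.svd(A)[2][-1]
    pos = lam > 0
    # Convex combination of the positively weighted part of the partition.
    return pts[pos].T @ lam[pos] / lam[pos].sum()

# In one dimension, the Radon point of three points is their median.
print(round(float(radon_point([[1.0], [5.0], [2.0]])[0]), 6))  # 2.0
```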

Download here