Unions of Things, Mostly in The Plane

  • Boris Aronov, Polytechnic Institute of NYU
  • Oct 9th 2012, 11:00am, room C005

I will discuss a class of problems that are loosely called "unions of objects." We study shapes that are formed by taking the union of a finite collection of well-behaved objects, mostly in the plane. In particular, we are concerned with how complicated such unions can be, measured by the number of intersections of object boundaries that appear on the boundary of the union. I will start with a motivating example or two, then state the general problem and provide some historical perspective on how it evolved. I will end with an update on the state-of-the-art results in the field.

Several tantalizing open problems will be mentioned.

The content of this talk is loosely based on a 2008 survey "State of the union (of geometric objects)" by Agarwal, Sharir, and Pach, and the speaker's own involvement in the subject. Motivation, examples, and sketches of the easier proofs will be provided. The discussion will be largely limited to two dimensions due to the speaker's inability to draw anything complicated.

Geometry and Mechanics of fibers: some numerical models

In this talk I will give an overview of our work on the simulation of fibers and entangled materials, such as hair, with a specific interest in virtual prototyping and computer graphics applications. I will first introduce a family of high-order, reduced models for discretizing Kirchhoff's equations for thin elastic rods in a way that is both faithful and robust. Such models are particularly suited for simulating inextensible fibers subject to bending and twisting, and featuring an arbitrary curly resting geometry. Then I will show how such models can be coupled to frictional contact using the nonsmooth contact dynamics framework, and I will present a hybrid iterative solver suitable for robustly handling thousands of packed fibers at reasonable frame rates. Finally, I will give some insights into the inverse modeling of fibers, which consists in taking an arbitrary curve geometry as input and inferring corresponding geometric and physical parameters of the simulator such that the input geometry corresponds to a stable configuration at equilibrium.

Multi-scale approaches for circle recognition in noisy images

  • Antoine Vacavant, ISIT, Clermont-Ferrand
  • June 28, 2012, 10am, room B013

This seminar presents an original approach to circle recognition from noisy data. It relies first on the Kerautret-Lachaud noise detector [1] to obtain a multi-scale representation of the noisy input contour. The topology of this representation is then simplified by constructing a set of irregular isothetic cells [2]. Finally, a set of solution circles is computed by a point-separation procedure, based on recent theoretical results on digital circles [3]. An incremental version of this algorithm, currently under development and aimed at computing a decomposition into circular arcs, will also be presented.

[1] B. Kerautret and J. Lachaud. Multi-scale analysis of discrete contours for unsupervised noise detection. In IWCIA 2009, LNCS 5852, pages 187-200. Springer, 2009.
[2] A. Vacavant, D. Coeurjolly, and L. Tougne. Topological and geometrical reconstruction of complex objects on irregular isothetic grids. In DGCI 2006, LNCS 4245, pages 470-481. Springer, 2006.
[3] E. Andres, T. Roussillon, and J.-L. Toutant. Digital circles, spheres and hyperspheres: From morphological models to analytical characterizations and topological properties. Discrete Applied Mathematics, 2012. To appear.
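As a toy illustration of recovering a circle from point data, here is the classical Kåsa algebraic fit (a least-squares method, not the separation-based recognition algorithm of the talk; all function names are mine):

```python
from math import sqrt, isclose

def fit_circle(points):
    # Kåsa algebraic fit: least squares on x^2 + y^2 + D*x + E*y + F = 0,
    # solved by Gaussian elimination on the 3x3 normal equations
    sxx = sxy = syy = sx = sy = n = 0.0
    bx = by = b1 = 0.0
    for x, y in points:
        z = x * x + y * y
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y; n += 1
        bx -= z * x; by -= z * y; b1 -= z
    A = [[sxx, sxy, sx, bx], [sxy, syy, sy, by], [sx, sy, n, b1]]
    for i in range(3):                    # forward elimination with pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= f * A[i][c]
    sol = [0.0] * 3
    for i in (2, 1, 0):                   # back substitution
        sol[i] = (A[i][3] - sum(A[i][c] * sol[c] for c in range(i + 1, 3))) / A[i][i]
    D, E, F = sol
    cx, cy = -D / 2, -E / 2
    return (cx, cy), sqrt(cx * cx + cy * cy - F)

# four exact samples of the circle with center (2, 1) and radius 5
(cx, cy), r = fit_circle([(7, 1), (2, 6), (-3, 1), (2, -4)])
assert isclose(cx, 2) and isclose(cy, 1) and isclose(r, 5)
```

On noisy data the algebraic fit is biased, which is one motivation for the more robust separation-based approach described above.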

Measuring Geometry Data Depth

  • Nabil Mustafa, ESIEE, Paris Est
  • June 22, 2012, 3:00pm, room C005

We are given some geometric data in the form of a set P of n points in R^d. The aim of combinatorial data approximation is then to give a succinct summary of the shape of P (e.g., the spread and distribution of the points in P). This becomes especially important as the data becomes unmanageably large (e.g., coming from a live stream), so that only a small 'sketch' can be stored. In this talk I will present various algorithms and techniques for approximating a set of points with a smaller set. I will first present and explain the notion of a centerpoint, which generalizes the concept of a median to higher dimensions. Then I will present various related notions, including simplicial depth and ray-shooting depth. Some recent results, as well as several open problems, will be presented.
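To make the centerpoint notion concrete, here is a small brute-force sketch of (approximate) Tukey halfspace depth in the plane; the function name and the direction-sampling strategy are mine, not from the talk:

```python
import math

def tukey_depth(q, points, n_dirs=720):
    # brute-force approximation of the Tukey (halfspace) depth of q: the
    # minimum, over sampled unit directions u, of the number of points in the
    # closed halfplane {p : (p - q) . u >= 0}
    depth = len(points)
    for k in range(n_dirs):
        ang = 2 * math.pi * k / n_dirs
        ux, uy = math.cos(ang), math.sin(ang)
        count = sum(1 for (x, y) in points
                    if (x - q[0]) * ux + (y - q[1]) * uy >= -1e-9)
        depth = min(depth, count)
    return depth

grid = [(x, y) for x in range(5) for y in range(5)]   # n = 25 points
# a centerpoint has depth >= n/(d+1) = 25/3; the grid center reaches depth 13,
# while a corner of the grid only has depth 1
assert tukey_depth((2, 2), grid) >= 25 / 3
assert tukey_depth((0, 0), grid) < 25 / 3
```

The centerpoint theorem guarantees that a point of depth at least n/(d+1) always exists, generalizing the 1-D median (depth n/2).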

Free-form deformation of large voxel grids

  • Noura Faraj, Telecom Paris
  • June 21, 2012, 1:20pm, room B13

The increasing accuracy of volumetric acquisition systems provides high-fidelity 3D digital models of real objects. These models are acquired in a particular state or posture, while many applications require different poses. Because it is impossible to capture the whole variety of potential configurations, deformation processes have been developed. Physical simulations lack generality, while hand-made poses are cumbersome to produce; moreover, these manipulations often deteriorate the data sets. In this talk, we will present "VoxMorph", an interactive free-form deformation tool for high-resolution voxel grids, which ensures limited distortions and avoids mass loss. The system is easy to use, even for novice users, robust, scalable, and enables out-of-core processing of data sets that do not fit into main memory. We thereby avoid costly (in money and time) scanning and deliver a large variety of different models.

Counting words generated by intervals

  • Anna Frid (Sobolev Institute of Mathematics, Novosibirsk, Russia)
  • April 26, 2012, 2:00pm, room to be announced

We consider the family of problems related to counting words over a finite alphabet generated by intervals, including Sturmian words and words generated by three-interval exchanges. We discuss geometric and combinatorial methods for solving them, and give examples of their use.
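A minimal sketch of a word generated by an interval coding of a rotation (the standard mechanical-word construction of Sturmian words; function names are mine), checking the characteristic factor complexity p(n) = n + 1:

```python
from math import floor, sqrt

def sturmian_word(alpha, n):
    # mechanical word of irrational slope alpha: the coding of the rotation by
    # alpha on [0, 1), equivalently s_k = floor((k+1)*alpha) - floor(k*alpha)
    return [floor((k + 1) * alpha) - floor(k * alpha) for k in range(n)]

def factor_complexity(w, n):
    # number of distinct length-n factors (subwords) of the word w
    return len({tuple(w[i:i + n]) for i in range(len(w) - n + 1)})

w = sturmian_word(sqrt(2) - 1, 3000)
# Sturmian words have exactly n + 1 distinct factors of each length n
for n in range(1, 7):
    assert factor_complexity(w, n) == n + 1
```

This minimal complexity p(n) = n + 1 characterizes Sturmian words among all aperiodic infinite words.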

Multiplicity Preserving Triangular Set Decomposition of Two Polynomials

  • Jinsan Cheng, Academy of Mathematics and Systems Science
  • November 22, 2011, 11:00am, room A 006

In this talk, a multiplicity preserving triangular set decomposition algorithm is proposed for a system of two polynomials. The algorithm decomposes the variety defined by the polynomial system into square-free and disjoint unmixed components represented by triangular sets, which may have negative multiplicities. Thus we can count the multiplicities of the non-vertical components. In the bivariate case, we give a complete algorithm to decompose the system into zeros represented by triangular sets as well as their multiplicities. We also analyze the complexity of the algorithm in the bivariate case. We implement our algorithm and show the effectiveness of the method with extensive experiments. This work is joint with Xiao-Shan Gao.

Representation of digital contours and dynamic MLP

  • Xavier Provençal, Université de Chambéry
  • October 8, 2011, 11:00am, room B013

The Minimum Length Polygon (MLP) is a first-order approximation of digital contours. This construction has recently been used as a length estimator in the context of segmentation by deformable models. In this context, an MLP is computed in order to estimate the perimeter of a digital contour; the contour is then modified by adding or removing a pixel, and its perimeter must be updated accordingly. I will present a theoretical framework leading to a polygonal representation of digital contours, the MLP being a particular case of these representations. Local rules will then be introduced in order to obtain an algorithm that modifies a digital contour and updates its MLP in time sublinear in the length of the modified part of the MLP.

Prediction of protein-protein interactions and interfaces: insight from aspecific docking

  • Juliette Martin (CNRS, IBCP Lyon)
  • October 11, 2011, 3:00pm, room C005

When one attempts to dock a protein with proteins that are not known to be native interactors (here called "random partners"), these partners seem to dock in a non-random manner, clustering near the native protein interface. This has been observed by our group and others on data sets of 3, 6 and 56 protein complexes (Fernandez-Recio J et al, 2004; Sacquin-Mora S et al, 2008; Wass M N et al, 2011; Martin J, unpublished).

In this work, we explore this property in further detail. We considered a non-redundant set of 198 proteins whose structures are available both in complexes and in isolated form, and a set of 334 protein domains, chosen to sample the protein structural space, as a source of random partners. Using the docking software Hex, running on GPU, we performed a large number of aspecific docking experiments in order to understand the process of aspecific docking and the nature of the binding sites identified in this way.

A new geometric construction of codes over small fields

  • Alain Couvreur (Projet COCQ, Institut de Mathématiques de Bordeaux)
  • June 9th, 2011, 10:30, room B013

We will first recall some basics of the theory of error-correcting codes and of geometric codes. We will then focus on the classical (and often difficult) problem of finding good codes of length n with coefficients in F_2. A classical approach consists in choosing a good code defined over an extension F_{2^m} of F_2 and "descending" it to F_2 via a classical arithmetic operation (restriction, trace, etc.). If the code defined over F_{2^m} is a judiciously chosen geometric code constructed on a curve of genus 0, the restriction to F_2 yields codes far better than in the "generic" case. In this talk, we will present a generalization of this approach to curves of arbitrary genus, based on the use of the Cartier operator. This approach makes it possible to construct interesting asymptotically good families of codes over F_2.

Intersection patterns of convex sets

  • Martin Tancer
  • Friday, May 27th, 11:00-12:00, room C103

Helly's classical theorem states that if we have at least d+1 convex sets in d-dimensional Euclidean space such that every d+1 of them have a nonempty intersection, then all the sets have a nonempty intersection. This theorem can be seen as a statement about the intersection patterns of convex sets.
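In dimension d = 1 the theorem says that if every two of a family of closed intervals intersect, then all of them share a common point; a minimal sketch (names are mine):

```python
def common_point(intervals):
    # Helly's theorem in dimension 1: for closed intervals, pairwise
    # intersection already forces a common point, since max(lefts) <= min(rights)
    lo = max(a for a, b in intervals)
    hi = min(b for a, b in intervals)
    return lo if lo <= hi else None

iv = [(0, 5), (2, 9), (4, 6), (3, 8)]
assert all(max(a, c) <= min(b, d) for a, b in iv for c, d in iv)  # pairwise ok
assert common_point(iv) == 4          # a point lying in all four intervals
```

For d >= 2 the pairwise condition is no longer enough (three pairwise-intersecting halfplanes can have empty common intersection); d+1-wise intersection is the right hypothesis.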

The question of which intersection patterns of convex sets are possible can be formalized via simplicial complexes. This question is algorithmically hard; however, there are many necessary (and also sufficient) conditions. The aim of the talk is to introduce this topic and to present some new results, such as the nonrepresentability of finite projective planes by convex sets in a fixed dimension.

Minimum-cut in Networks and some applications in Image Processing

  • Jerome Darbon, ENS Cachan
  • Friday, May 27th, 10:00-11:00, room B011

Many image processing problems can be formulated as the minimization of a Markovian energy. In this talk, a combinatorial point of view is considered. First, I will briefly review the combinatorial tools. Then I'll describe a combinatorial algorithm for minimizing the Total Variation (which allows one to remove noise while preserving contours and edges in signals). Several other applications will be illustrated, such as crystalline mean curvature flow, deconvolution, and compressive sensing reconstruction.
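The talk's TV algorithm itself is not reproduced here, but the classical graph-cut construction for binary Markovian energies that such combinatorial methods build on can be sketched on a 1-D toy signal, using a plain Edmonds-Karp max-flow (all names are mine):

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp max-flow on a dense capacity matrix; cap is modified in
    # place and holds the residual capacities on return
    n = len(cap)
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:           # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        bottleneck, v = float('inf'), t        # push flow along the path
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        total += bottleneck

def denoise_binary(obs, data_w, smooth_w):
    # binary MRF denoising of a 1-D 0/1 signal via min-cut:
    # E(x) = data_w * sum_i |x_i - obs_i| + smooth_w * #{i : x_i != x_{i+1}}
    n = len(obs)
    s, t = n, n + 1
    cap = [[0] * (n + 2) for _ in range(n + 2)]
    for i, o in enumerate(obs):
        cap[s][i] = data_w * o            # paid if x_i = 0 (i cut from source)
        cap[i][t] = data_w * (1 - o)      # paid if x_i = 1
    for i in range(n - 1):                # smoothness edges, cut if labels differ
        cap[i][i + 1] = cap[i + 1][i] = smooth_w
    max_flow(cap, s, t)
    # pixels still reachable from the source in the residual graph get label 1
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n + 2):
            if v not in seen and cap[u][v] > 0:
                seen.add(v)
                q.append(v)
    return [1 if i in seen else 0 for i in range(n)]

print(denoise_binary([1, 1, 0, 1, 1], data_w=2, smooth_w=3))  # the isolated 0 is flipped
```

The min-cut value equals the minimum of the energy, which is the key identity behind graph-cut approaches to Markovian energies.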

On the nonexistence of k-reptile simplices in R^3 and R^4.

  • Zuzana Safernová
  • May, Wed. 25th, 11:00-12:00, room C103

A d-dimensional simplex S is called a k-reptile if it can be tiled without overlaps by simplices S_1,S_2,...,S_k that are all mutually congruent and similar to S. For d=2, k-reptile simplices (triangles) exist for many values of k and they have been completely characterized by Snover, Waiveris, and Williams. On the other hand, for d >= 3, only one construction of k-reptile simplices is known, the Hill simplices, and it provides only k of the form m^d, m=2,3,...

We prove that for d=3, k-reptile simplices (tetrahedra) exist only for k=m^3. This partially confirms a conjecture of Hertel, asserting that the only k-reptile tetrahedra are the Hill tetrahedra.

We also prove that for d=4, k-reptile simplices can exist only for k=m^2. For d=4 it remains to find out whether there really exist m^2-reptile simplices for m non-square and whether all m^4-reptile simplices are indeed Hill simplices.

The first part is joint work with Jiří Matoušek, the second with Honza Kynčl.

Orbitwise polyhedral representation conversion

  • David Bremner, U. New Brunswick, Canada
  • Friday, May 20th, 11:00-12:00, room C201

Converting between the inequality and finite-generator representations of a convex polyhedron is both a fundamental problem in algorithmic geometry and a useful subroutine in various optimization techniques. Unfortunately, many problems of interest remain out of reach for current conversion methods. In certain applications the output is both large and symmetric. This has motivated the study of the problem of "orbitwise" representation conversion: instead of producing all of the output, one looks for at least one element in each orbit (under some natural symmetry group).

I will start by surveying existing methods for orbitwise enumeration. These can be broadly classified into methods based on recursive problem decomposition, and those that "symmetrize" standard methods based on pivoting and incremental construction.

One common theme in efficient implementations of existing methods is the use of invariants to avoid expensive isomorphism tests. Another approach to avoiding isometry tests, and also permitting a more straightforward use of perturbation, is to construct a linear approximation of a fundamental domain for the given symmetry group. I'll finish the talk by describing ongoing work using this fundamental domain approach in incremental and pivoting based algorithms.

Algebraicity of the complete generating function of Gessel walks

  • Alin Bostan (Projet ALGORITHMS, INRIA Paris-Rocquencourt)
  • April 14th, 2011, 10:30, room B13

We will show that certain questions in enumerative combinatorics can be treated systematically using an "experimental mathematics" approach driven by modern computer algebra algorithms. We will describe the computer-assisted discovery and proof of the algebraicity of the generating function of walks confined to the quarter plane that use unit steps to the east, west, south-west, or north-east.
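The walks in question can be counted directly by dynamic programming, which is the kind of experimental data such computer-assisted proofs start from (a sketch; the function name is mine). The counts 1, 2, 11, 85 of closed Gessel walks of lengths 0, 2, 4, 6 are well known:

```python
def gessel_counts(max_n):
    # number of quarter-plane walks of length 2n, for n = 0..max_n, that start
    # and end at the origin, with unit steps E, W, NE, SW
    steps = [(1, 0), (-1, 0), (1, 1), (-1, -1)]
    counts, dist = [1], {(0, 0): 1}
    for length in range(1, 2 * max_n + 1):
        nxt = {}
        for (x, y), c in dist.items():
            for dx, dy in steps:
                nx, ny = x + dx, y + dy
                if nx >= 0 and ny >= 0:       # stay in the quarter plane
                    nxt[(nx, ny)] = nxt.get((nx, ny), 0) + c
        dist = nxt
        if length % 2 == 0:
            counts.append(dist.get((0, 0), 0))
    return counts

print(gessel_counts(3))  # [1, 2, 11, 85]
```

Guessing an algebraic equation satisfied by the generating function of such a sequence, then proving it, is the "experimental mathematics" workflow described in the talk.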

Discrete convexity and circularity in image analysis

  • Tristan Roussillon, ATER at Université Lyon 2, member of LIRIS (m2disco team)
  • March 17th, 2011, 11:15, room B13

A digital image, which results from sampling an observed signal, is a regular arrangement of small, distinct colored square elements. Digital geometry, which studies the geometric properties of such a space devoid of continuity, offers a range of techniques dedicated to the processing and analysis of digital images.

In this framework, the homogeneous and meaningful regions of an image are considered with the objective of representing their contours, viewed as 4-connected (staircase-shaped) paths. The talk focuses on the convexity and circularity properties of these contours.

In the first part, I will present a local analysis of discrete convexity leading to theoretical results (the extent of the neighborhood to consider), as well as practical applications (a reversible polygonal representation respecting convex and concave parts, length estimation). In the second part, I will first introduce the problem of circular separability and its geometric interpretation in the space of circles and in its dual space; I will then present an incremental algorithm for recognizing digital circular arcs and for extracting all maximal digital circular arcs. From this representation a curvature estimator is derived, which is experimentally observed to be convergent.

Nonlinear digital primitives: a Digital Level Layers approach


  • Laurent Provot, post-doctorant à l'ISIT, Clermont-Ferrand
  • March 8th, 2011, 3:00, room B13

One of the fundamental goals of digital geometry is to provide a theoretical framework for transposing into Z^n well-known notions of Euclidean geometry such as algebraic curves and surfaces. While linear primitives such as digital lines and planes are now well understood, few studies have addressed nonlinear digital primitives, apart from the circle, perhaps because there is no consensus on their definition. Several approaches exist for introducing such objects, each with its advantages and drawbacks. In this talk, I will present an approach, explored with Yan Gérard of ISIT in Clermont-Ferrand, that defines nonlinear digital primitives analytically through the notion of Digital Level Layers. I will describe a recognition algorithm for such structures and propose some example applications that put these DLLs to use.

Multi-scale segmentation of granular materials by stochastic watershed

  • Luc Gillibert, Centre de Morphologie Mathématique de l'Ecole des mines de Fontainebleau
  • March 4th, 2011, 2:00, room B13

We present a noise-tolerant segmentation method for granular materials that operates on multi-scale images. The idea of the approach is to extract the granulometry of the image using morphological tools, then to use this granulometry as a reference for several successive segmentations corresponding to different grain sizes. Each of these segmentations is computed starting from a stochastic watershed (SWS, [1]). The SWS is defined as a probability density function of boundaries obtained by computing a large number of watersheds with randomly drawn markers, the number of markers being imposed by the granulometry [2]. This SWS can be computed efficiently using a graph whose vertices are then merged to obtain a hierarchy on the boundaries of the image. The method is validated on a random model based on noisy spheres and on real images from X-ray microtomography. The material studied is butalite propellant, whose grains were fragmented by mechanical shock.

[1] Angulo, J., Jeulin, D.: Stochastic watershed segmentation. In Proceedings of ISMM 2007, 8th International Symposium on Mathematical Morphology.
[2] Faessel, M., Jeulin, D.: Segmentation of 3D microtomographic images of granular materials with the stochastic watershed. Journal of Microscopy, Volume 239, Issue 1, 2010.

Symbolic-numeric computation of Puiseux series

  • Adrien Poteaux, LIP6, Université Paris 6
  • March 3rd, 2011


Matching Fluid Simulation Elements to Surface Geometry and Topology

  • Christopher Batty, UBC
  • Oct 27th, 2:00, room A006

We introduce an Eulerian liquid simulation framework based on the Voronoi diagram of a potentially unorganized collection of pressure samples. Constructing the simulation mesh in this way allows us to place samples anywhere in the computational domain; we exploit this by choosing samples that accurately capture the geometry and topology of the liquid surface. When combined with high-resolution explicit surface tracking this allows us to simulate nearly arbitrarily thin features, while eliminating noise and other artifacts that arise when there is a resolution mismatch between the simulation and the surface—and allowing a precise inclusion of surface tension based directly on and at the same resolution as the surface mesh. In addition, we present a simplified Voronoi/Delaunay mesh velocity interpolation scheme, and a direct extension of embedded free surfaces and solid boundaries to Voronoi meshes.

Real-time Interventional Radiology Simulation in Complex Vascular Models

  • Vincent Luboz, Imperial College, London
  • Monday, October 25th, 09:30, amphitheater

The basis of all training in interventional radiology is the development of core skills in manipulating instruments in various vascular anatomies. Computer simulators are emerging to help in this task. I will present our training framework, which provides realistic instrument behaviour in complex vascular models. The medical instruments are modelled as a mass-spring particle system, while the vasculature is a triangulated surface mesh segmented from patient datasets. The segmentation method, using level sets and skeletonization, is essential for real-time simulation. A specially designed commercial haptic device allows the trainee to use real instruments to guide the simulation through the vasculature of 28 different patients. A new collision detection algorithm allows an efficient computation of the contacts, leaving more time to deal with the collision response for a realistic simulation in real time. The behaviour of our simulated instruments has been validated in a silicone vascular phantom, and preliminary results show close correlation and realistic behaviour.

"Bag of lines" models for non-central linear cameras

  • Guillaume Batog, VEGAS
  • Friday, October 15th, 11:00 am, room C103

Reconstruction techniques, such as stereography, are easily deduced from a geometric model of the pinhole camera as a set of rays passing through a point. Given two 2D retinal points, consider the two corresponding rays obtained by inverse projection: if the rays intersect, the two points are in stereographic correspondence.

In this work, we extend this approach to a class of central and non-central imaging devices, called "linear cameras", that subsumes pinhole, pushbroom, X-slit, pencil and linear oblique cameras. We propose a unified model from which we derive closed-form formulas for projection and stereo-correspondence between "any" pair of linear cameras. The main ingredient is a decomposition of a camera into a retina and a bag of lines, the latter being represented by a linear "admissible map" that globally preserves any line of that bag.

Mid-level Representation for Images and Videos

  • Sylvain Paris, Adobe Research
  • Friday, October 8th, 10:00 am (room to be announced)

Digital photo and video cameras are now a commodity; even cell phones can record high-definition footage. This rapid development has fostered the field of computational photography, which aims to leverage computers to help photographers. I will present a few of my recent projects in this area that rely on mid-level information, that is, local cues such as edges, texture, or motion. In particular, I will show that choosing an appropriate representation for these mid-level cues can help design effective algorithms for edge-aware image processing, illumination analysis, and video editing.

Computer Graphics techniques for analyzing and enhancing shape information

  • Michela Mortara, Institute for Applied Math. and Inf. Tech., Genoa
  • Thursday, Sept 16, room A006

Understanding the implicit knowledge carried by a digital shape is not a trivial task. In general, characterizing a shape amounts to constructing a computational description of its most representative features, usually a few basic types, possibly along with their relationships (structural decomposition). This and other aspects of shape modeling and reasoning have been studied for many years at the Genova department of the Institute of Applied Mathematics and Information Technologies (IMATI-GE / CNR).

Recent acquisition tools are able to produce huge volumes of precise 3D data, and understanding such complex shapes would provide a significant boost to processes where shape models play a fundamental role. An example is structural biology, where the geometric shape of molecular surfaces strongly influences docking processes: in this setting, a meaningful morphological characterization of molecular surfaces into salient features like cavities and depressions can support the identification of potential binding sites. With this in mind, understanding a shape requires tools that, by analyzing the available information (geometry, topology, application context, etc.), generate effective high-level computational models describing shapes and their structures.

This talk gives an overview of the main techniques developed so far at IMATI-GE / CNR for analyzing and enhancing shape information. In particular, the multiscale surface characterization algorithm named Tailor will be detailed, which identifies morphological features at different scales on surfaces represented by triangular meshes; a parallel implementation of the algorithm has also been developed to cope with huge data sets. From the Tailor characterization, a segmentation method has been derived which is especially suited to identifying tubular parts along with structural and geometric attributes (Plumber). Some application examples of such shape analysis tools will be discussed.

Hardness of embedding simplicial complexes

  • Martin Tancer, Charles University, Prague
  • Monday, Sept 6, 11 am, room C103

A simplicial complex is a general representation of a geometric space as a collection of simplices glued along common subfaces. I will show that the problem of deciding if a simplicial complex of dimension at most 2 can be embedded in R^4 is NP-hard.

Compact stencils with good isotropic properties for image gradient and Laplacian

Most of the discrete gradient and Laplacian operators (masks) commonly used in image processing have poor rotational invariance properties. In this talk, I present a theoretical study of equivariant 3x3 explicit and implicit stencils for the image gradient and Laplacian. Two simple approaches are discussed. The first is based on the frequency analysis of stencils. The second consists of representing differential expressions by circular (spherical) means and deriving exact quadratures for the resulting integrals. Extensions to scattered, multidimensional, and curved-geometry data are considered.
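As a small illustration of the theme, here are the classical 5-point and Mehrstellen-type 9-point Laplacian stencils (standard stencils, not necessarily the ones derived in the talk; function names are mine); both are exact on quadratics, while the 9-point one has much smaller anisotropy at higher orders:

```python
def apply_stencil(stencil, f, x, y, h=1.0):
    # apply a 3x3 finite-difference stencil to samples of f around (x, y)
    total = 0.0
    for i in range(3):          # row index -> y offset
        for j in range(3):      # column index -> x offset
            total += stencil[i][j] * f(x + (j - 1) * h, y + (i - 1) * h)
    return total / (h * h)

five_point = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
# Mehrstellen-type 9-point stencil with better rotational invariance
nine_point = [[1/6, 4/6, 1/6], [4/6, -20/6, 4/6], [1/6, 4/6, 1/6]]

f = lambda x, y: x * x + y * y          # Laplacian of f is 4 everywhere
assert abs(apply_stencil(five_point, f, 0.3, -1.2) - 4.0) < 1e-9
assert abs(apply_stencil(nine_point, f, 0.3, -1.2) - 4.0) < 1e-9
```

The difference between such stencils only shows up in their higher-order error terms, which is exactly what the frequency analysis mentioned above quantifies.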

On the reconstruction of an object from its traces

The first part deals with the real-variable case: starting from a planar object f(x,y), for a given a one considers the integral \int f(ay+b, y) dy as a function f_a(b); the problem is to reconstruct f from the various functions f_a(b). The solution is given by the filtered backprojection theorem. We give an idea of the relation of this problem to medical imaging, with a heuristic justification of Beer's law. The second part deals with the complex-variable case: we work above a domain D \subset C, on the domain D \times C, with the canonical projection \pi: D \times C \to D. We then consider an analytic curve Y \subset D \times C, such that the restriction of \pi to Y is proper, and a function f(P) on Y. We define its traces u_k(x) = \sum_{i=1}^d f(x,y_i) y_i^k. If the function f is holomorphic, its traces are holomorphic. We then show:

Theorem. If the u_k extend holomorphically to a domain \tilde D, then Y extends to an analytic curve \tilde Y \subset \tilde D \times C, and f extends to a holomorphic function \tilde f on \tilde Y. The relation to the reconstruction problem is clear: under certain conditions, given a family of traces u_k, one could reconstruct an object having these traces (and this object would be uniquely defined).
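For the first, real-variable part, a basic sanity property of the projections f_a(b) is that each one carries the total mass of f, since \int f_a(b) db = \int\int f(x,y) dx dy. A discrete toy sketch of this (function names are mine):

```python
def projections(image, a):
    # f_a(b) = sum over y of f(a*y + b, y) on an integer grid (f = 0 off-grid),
    # i.e. sums of f along the family of parallel lines x = a*y + b
    proj = {}
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            b = x - a * y                 # the unique line of slope a through (x, y)
            proj[b] = proj.get(b, 0) + value
    return proj

image = [[0, 1, 2], [3, 0, 1], [1, 1, 0]]
total = sum(map(sum, image))              # total mass of f
# every projection carries the full mass, whatever the direction a
for a in (-2, -1, 0, 1, 2):
    assert sum(projections(image, a).values()) == total
```

Filtered backprojection goes much further, recovering f itself from sufficiently many such projections.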

Parametric systems and applications

  • Guillaume Moroz, Maplesoft
  • Friday, Feb. 12, 11am, room A006

Parametric systems appear in many applied problems. A major challenge is to describe the number of solutions of a system as a function of its parameters. After a quick presentation of the computer algebra tools that allow such a problem to be tackled efficiently, I will present various applications. I will dwell in particular on a problem from robotics requiring the study of the singularities and triple points of a parametric system.
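A toy instance of describing the number of solutions as a function of a parameter is the quadratic x^2 + p x + 1, whose real-root count changes only across the discriminant locus p^2 - 4 = 0 (the sketch and names are mine, far simpler than the systems treated in the talk):

```python
def real_root_count(p):
    # number of real solutions of x**2 + p*x + 1 = 0 as a function of the
    # parameter p: the discriminant locus p**2 - 4 = 0 splits the parameter
    # line into regions on which the number of solutions is constant
    disc = p * p - 4
    if disc > 0:
        return 2
    if disc == 0:
        return 1
    return 0

counts = [real_root_count(p) for p in (-3, -2, 0, 2, 3)]   # [2, 1, 0, 1, 2]
```

For systems in several variables and parameters, the analogous object is the discriminant variety, computed with the kinds of computer algebra tools mentioned above.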

Multiple a contrario random sample consensus

  • Julien Rabin, Paris-Dauphine
  • Thursday, Feb. 11, 2pm, room C103

Many applications in the field of computer vision (3D reconstruction, motion segmentation, object recognition, ...) rely on fitting a geometric model to data. This is generally achieved by matching control points between pairs of images, and then using a simultaneous grouping and fitting algorithm, such as the Hough transform or the RANSAC algorithm. Since RANSAC is dedicated to the detection of a single transformation, a few variants have been proposed to extend it to multiple detections, but they require strong priors (model definition, error distribution, ...). We propose a new algorithm for simultaneous model selection and multiple detection, using an a contrario framework inspired by the previous work of Moisan and Stival.
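For reference, the standard single-transformation RANSAC that this work generalizes can be sketched in a few lines (a toy for line fitting; all parameters and names are mine):

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    # standard single-model RANSAC: repeatedly fit a line y = a*x + b through
    # two random points and keep the model with the most inliers
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # skip degenerate (vertical) samples
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# ten points on y = 2x + 1 plus three gross outliers
pts = [(x, 2 * x + 1) for x in range(10)] + [(1, 9), (4, -3), (7, 30)]
(a, b), inliers = ransac_line(pts)        # recovers y = 2x + 1 with 10 inliers
```

Note the fixed inlier threshold `tol`: it is exactly this kind of prior that an a contrario formulation aims to remove by controlling the expected number of false detections instead.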

Small-Size Epsilon-Nets for Geometric Range Spaces

  • Esther Ezra, Courant Institute, NYU
  • Tuesday Dec. 15, 11am, room A006

Since their introduction in 1987 by Haussler and Welzl, \epsilon-nets have become one of the central concepts in computational and combinatorial geometry, and have been used in a variety of applications, such as range searching, geometric partitions, and bounds on curve-point incidences. In particular, they are strongly related to geometric set-cover and hitting-set problems.

A range space (or hypergraph) (X,R) is a pair consisting of an underlying universe X of objects and a certain collection R of subsets of X (also called ranges). Given a range space (X,R) and a parameter 0 < \epsilon < 1, an \epsilon-net for (X,R) is a subset N of X with the property that any range that captures at least an \epsilon-fraction of X contains an element of N. In other words, N is a hitting set for all the "heavy" ranges. Geometric range spaces are of particular interest, since they admit small-size \epsilon-nets: the Epsilon-Net Theorem of Haussler and Welzl asserts that in this case there exists an \epsilon-net of size O((1/\epsilon) \log(1/\epsilon)).

One of the major questions in the theory of \epsilon-nets, open since their introduction more than 20 years ago, is whether the factor \log(1/\epsilon) in the upper bound on their size is really necessary, especially in typical low-dimensional geometric situations. A central motivation arises from the technique of Bronnimann and Goodrich to obtain, in polynomial time, improved approximation factors for the geometric hitting-set and set-cover problems: the existence of an \epsilon-net of size O((1/\epsilon) f(1/\epsilon)), for some slowly growing function f(.), implies an approximation factor of O(f(OPT)), where OPT is the size of the smallest such set.

In this talk I will survey some of the fundamental results concerning small-size \epsilon-nets. I will then discuss range spaces of points and axis-parallel boxes in two and three dimensions, and show that they admit an \epsilon-net of size O((1/\epsilon) \log\log(1/\epsilon)).
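A minimal sketch of the \epsilon-net property for the simplest geometric range space, points and intervals on the line, where nets of size O(1/\epsilon) with no log factor exist (the checker and names are mine):

```python
def is_eps_net(points, net, eps):
    # check the net property for the range space of intervals on the line:
    # every interval capturing >= eps * n of the points must contain a net point
    pts = sorted(points)
    n = len(pts)
    k = int(eps * n)                # heavy ranges capture at least k points
    for i in range(n - k + 1):
        window = pts[i:i + k]       # minimal heavy interval: k consecutive points
        if not any(window[0] <= q <= window[-1] for q in net):
            return False            # a heavy interval avoids the net
    return True

points = list(range(100))
net = [12, 37, 62, 87]              # size 4 = O(1/eps), no log factor in 1-D
assert is_eps_net(points, net, 0.25)
assert not is_eps_net(points, [50], 0.25)
```

Checking only the minimal heavy intervals (k consecutive points) suffices, since every larger heavy interval contains one of them; the open question above is whether such log-free nets extend to richer geometric ranges.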

Continuity Mapping for Multi-Chart Textures

  • Gustavo Ariel Patow, University of Girona
  • Tuesday Nov. 16, 10am, room C103

This talk will consist of two parts. First, I'll give a short overview of the Geometry and Graphics Group at the University of Girona, Spain. Then I'll talk about Continuity Mapping for Multi-Chart Textures, one of our most recent works. It is well known that multi-chart parameterizations introduce seams over meshes, causing serious problems for applications like texture filtering, relief mapping, and simulations in the texture domain. We present two techniques, collectively known as Continuity Mapping, that together make any multi-chart parameterization seamless: Traveler's Map solves the spatial discontinuities of multi-chart parameterizations in texture space thanks to a bidirectional mapping between areas outside the charts and the corresponding areas inside; and Sewing the Seams addresses the sampling mismatch at chart boundaries using a set of stitching triangles that are not true geometry, but are merely evaluated on a per-fragment basis to perform consistent linear interpolation between non-adjacent texel values. Continuity Mapping does not require any modification of the artist-provided textures or models, it is fully automatic, and it achieves continuity with small memory and computational costs.

Exact Delaunay graph of smooth convex pseudo-circles: General predicates, and implementation for ellipses

  • George Tzoumas, Vegas
  • Friday, Oct. 22, 11am, room A006

We examine the problem of computing exactly the Delaunay graph (and the dual Voronoi diagram) of a set of possibly intersecting smooth convex pseudo-circles in the Euclidean plane, given in parametric form. Pseudo-circles are convex sites, every pair of which has at most two intersection points. The Delaunay graph is constructed incrementally. We propose robust and efficient algorithms for all required predicates, under the exact computation paradigm. We focus on InCircle, which is the hardest predicate, and express it by a sparse 5x5 polynomial system, which allows for an efficient implementation by means of successive Sylvester resultants and a factorization lemma. A customized subdivision-based algorithm with quadratic convergence is used as a form of filtering, allowing us to answer the predicate in real time in the non-degenerate cases. Finally, we present a CGAL-based C++ implementation for the case of ellipses. Our experiments show that ellipses may outperform polygonal approximations with more than 15 edges per ellipse.
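
For context, in the classical point case the InCircle predicate reduces to the sign of a small determinant; the sketch below (plain Python with exact rationals) is that textbook degenerate case, not the 5x5 polynomial system for pseudo-circles described in the talk.

```python
from fractions import Fraction

def incircle(a, b, c, d):
    """Sign of the classical InCircle determinant for points:
    +1 iff d lies inside the circle through a, b, c (given in
    counterclockwise order), -1 if outside, 0 if on the circle.
    Exact rational arithmetic makes the predicate error-free."""
    rows = []
    for (x, y) in (a, b, c):
        dx, dy = Fraction(x) - d[0], Fraction(y) - d[1]
        rows.append((dx, dy, dx * dx + dy * dy))
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    det = (ax * (by * cw - bw * cy)
           - ay * (bx * cw - bw * cx)
           + aw * (bx * cy - by * cx))
    return (det > 0) - (det < 0)

# Unit circle through three points; the origin is inside, (3, 0) outside.
assert incircle((1, 0), (0, 1), (-1, 0), (0, 0)) == 1
assert incircle((1, 0), (0, 1), (-1, 0), (3, 0)) == -1
```

For curved sites such as ellipses the analogous predicate involves the tritangent circle of three sites, which is why resultant-based elimination is needed instead of a single determinant.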

On the Topology of Planar Algebraic Curves

  • Luis Penaranda, Vegas
  • Friday, June 26, 11am, room C103

We revisit the problem of computing the topology and geometry of a real algebraic plane curve. The topology is of prime interest, but geometric information, such as the position of singular and critical points, is also relevant. A challenge is to compute this information efficiently in the given coordinate system, even if the curve is not in generic position.

Previous methods based on the cylindrical algebraic decomposition (CAD) use sub-resultant sequences and computations with polynomials with algebraic coefficients. A novelty of our approach is to replace these tools by Gröbner basis computations and isolation with rational univariate representations. This has the advantage of avoiding computations with polynomials with algebraic coefficients, even in non-generic positions. Our algorithm isolates critical points in boxes and computes a decomposition of the plane by rectangular boxes. This decomposition also induces a new approach for computing an arrangement of polylines isotopic to the input curve. We also present an analysis of the complexity of our algorithm. An implementation of our algorithm demonstrates its efficiency, in particular on high-degree non-generic curves.

This is a joint work with Jinsan Cheng, Marc Pouget, Sylvain Lazard, Fabrice Rouillier and Elias Tsigaridas.

Triangulating the 3D periodic space

  • Manuel Caroli, Inria Sophia, Geometrica
  • Wednesday, June 24, 2pm, room B011

This work is motivated by the need for software computing 3D periodic triangulations in numerous domains including astronomy, material engineering, biomedical computing, and fluid dynamics. We design an algorithmic test to check whether a partition of the 3D flat torus actually forms a triangulation (which in particular implies that it is a simplicial complex). We propose an incremental algorithm that computes the Delaunay triangulation of a set of points in the 3D flat torus without duplicating any point, whenever possible. To the best of our knowledge, this is the first algorithm of this kind whose output is provably correct. Provably correct algorithms found in the literature always compute with 27 copies of the input points, whereas our algorithmic test detects when such a duplication can be avoided, which is usually the case in practical situations. The algorithm was implemented and has been accepted by the CGAL Editorial Board as a new package of CGAL 3.5.
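
A minimal sketch of the geometry underlying periodic computations (illustrative only, not CGAL code): on the flat torus, the distance between two points takes the shorter way around in each coordinate, which is exactly why naive algorithms fall back on 27 translated copies of the input.

```python
def torus_dist2(p, q, period=1.0):
    """Squared distance on the flat torus: in each coordinate, take the
    shorter of the two ways around (the minimum-image convention)."""
    d2 = 0.0
    for a, b in zip(p, q):
        d = abs(a - b) % period
        d = min(d, period - d)
        d2 += d * d
    return d2

# (0.1, 0.1, 0.1) and (0.9, 0.9, 0.9) are close across the boundary.
assert abs(torus_dist2((0.1, 0.1, 0.1), (0.9, 0.9, 0.9)) - 0.12) < 1e-9
assert torus_dist2((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)) == 0.0
```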

A worst-case linear-time and linear-space algorithm for detecting intersecting paths

  • Srecko Brlek, Université du Québec à Montréal
  • Thursday, June, 18th, 11 am, room to be announced

For discrete sets coded by words describing their contour, several linear algorithms have been designed to determine their shape properties. Most of them assume that the boundary is a closed discrete curve. However, checking whether a path is closed requires detecting whether an element of a set appears twice, which costs an extra log N factor in the worst case, whether sorting or hash coding is used. This work removes that drawback by determining, in linear time and space, whether a given path forms the contour of a discrete figure. This is achieved by using a radix tree structure over a quadtree, whose nodes are the visited grid points, enriched with neighborhood links that are essential for obtaining linearity.

(This is joint work with M. Koskas and X. Provençal.)
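
A minimal sketch of the task itself (hypothetical code: it uses a hash set rather than the paper's radix-tree structure, so it does not achieve the worst-case linearity guarantee): check that a 4-letter Freeman chain code describes a closed path on the grid that never revisits a point except to close up.

```python
# Freeman chain code on the square grid: 0=east, 1=north, 2=west, 3=south.
MOVES = {'0': (1, 0), '1': (0, 1), '2': (-1, 0), '3': (0, -1)}

def is_contour(word):
    """Return True iff the coded path is closed and non-self-intersecting,
    i.e. it is the contour of a discrete figure."""
    x, y = 0, 0
    visited = {(0, 0)}
    for i, step in enumerate(word):
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
        closing = (i == len(word) - 1) and (x, y) == (0, 0)
        if (x, y) in visited and not closing:
            return False           # path touches itself before the end
        visited.add((x, y))
    return (x, y) == (0, 0)        # path must return to its start

assert is_contour("00112233")      # boundary of a 2x2 square
assert not is_contour("0011")      # open path
assert not is_contour("0213")      # revisits the start too early
```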

On tilings of the plane by translation.

  • Srecko Brlek, Université du Québec à Montréal
  • Wednesday, June, 17th, room to be announced

We consider the problem of tiling the plane with translated copies of a single tile on the square lattice. Such tiles have been characterized by Beauquier and Nivat: T tiles the plane if and only if the word w(T) coding its boundary satisfies the equation w(T) = A.B.C.\hat{A}.\hat{B}.\hat{C}, where \hat{W} denotes the word W read backwards with each letter complemented, i.e., the same boundary path traversed in the opposite direction. We have designed linear algorithms to detect such tiles, based on nice combinatorial properties of words. It turns out that a sub-class of these tiles, called "squares", is remarkable: not only do they tile the plane in two distinct ways, but they also show unexpected connections to famous sequences such as the ubiquitous Fibonacci sequence and the Pell sequence. (This is joint work with X. Provençal, and later with A. Blondin Massé, A. Garon, S. Labbé and M. Mendès France.)

Variational Harmonic Maps for Space Deformation


  • Mirela Ben Chen, Technion
  • May, Monday 25, 2:00 pm, Room C005 ("salle du conseil")

A space deformation is a mapping from a source region to a target region within Euclidean space which best satisfies some user-specified constraints. It can be used to deform shapes embedded in the ambient space and represented in various forms – polygon meshes, point clouds or volumetric data. For a space deformation method to be useful, it should possess some natural properties, e.g. detail preservation, smoothness and intuitive control. A harmonic map from a domain Ω in R^d to R^d is a mapping whose d components are harmonic functions. Harmonic mappings are smooth and regular, and if their components are coupled in some special way, the mapping can be detail-preserving, making it a natural choice for space deformation applications. The challenge is to find a harmonic mapping of the domain which satisfies the constraints specified by the user, yet is also detail-preserving and intuitive to control.

We generate harmonic mappings as a linear combination of a set of harmonic basis functions, which have a closed-form expression when the source region boundary is piecewise linear. This is done by defining an energy functional of the mapping and minimizing it within the linear span of these basis functions. The resulting mapping is harmonic, and a natural "As-Rigid-As-Possible" deformation of the source region. Unlike other space deformation methods, our approach does not require an explicit discretization of the domain. It is shown to be much more efficient, yet it generates deformations comparable to state-of-the-art methods. We describe an optimization algorithm to minimize the deformation energy, which is robust, provably convergent, and easy to implement.
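
To make "harmonic" concrete: discretely, a harmonic function equals the average of its neighbours. The toy sketch below (scalar-valued, on a grid, via Jacobi iteration; the talk instead uses closed-form vector-valued harmonic bases on piecewise-linear boundaries, so everything here is an illustrative assumption) extends boundary values harmonically into a square.

```python
def harmonic_grid(boundary, n, iters=500):
    """Harmonically extend boundary values into an n x n grid by Jacobi
    iteration: each interior value converges to the average of its four
    neighbours, the discrete analogue of a harmonic function."""
    u = [[boundary(x / (n - 1), y / (n - 1))
          if x in (0, n - 1) or y in (0, n - 1) else 0.0
          for x in range(n)] for y in range(n)]
    for _ in range(iters):
        u = [[u[y][x] if x in (0, n - 1) or y in (0, n - 1)
              else (u[y - 1][x] + u[y + 1][x] + u[y][x - 1] + u[y][x + 1]) / 4
              for x in range(n)] for y in range(n)]
    return u

# f(x, y) = x*y is harmonic, so the interior should reproduce it exactly.
u = harmonic_grid(lambda x, y: x * y, 9)
assert abs(u[4][4] - 0.25) < 1e-6
```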

Algorithmic topology of curves on surfaces

  • Éric Colin de Verdière, CNRS researcher at ENS Paris
  • Wednesday May 13, 11:00am, room C05 (salle du conseil)

Given a "complicated" surface (homeomorphic to a sphere with one or more handles attached), how can we cut it so that it becomes planar (homeomorphic to a disk)? How can we find the shortest closed curve that cannot be contracted to a point while staying on the surface? How can we shorten a curve as much as possible by sliding it along a given surface (up to homotopy)?

The goal of this talk is to present several algorithmic results and techniques developed recently by various authors to solve this type of question, some of which are motivated by applications. For instance, cutting a surface open is often a necessary step in computer graphics (to map a texture onto it) or in geometric modeling (to parameterize it over a planar domain).

Optimal consensus set for fitting a digital line or plane

  • Rita Zrour, ESIEE
  • April 21, 11:00am, B013

The goal of this work is to present a method for fitting a digital line/plane to a cloud of 2D/3D points in the presence of outliers. In digital geometry, a digital line/plane is defined as a set of discrete points in 2D/3D space lying between two parallel Euclidean lines/planes separated by a fixed distance. Our objective is then to fit the parameters of these Euclidean lines/planes so that they include as many elements of the initial set of discrete points as possible. Our discrete model allows us to examine every possible consensus set, guaranteeing the optimality and exactness of the solution. The fitting runs in O(N^2 log N) time in 2D and O(N^3 log N) in 3D, where N is the number of points. This complexity is close to that of robust statistical regression methods (Least Median of Squares).
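
A naive version of the 2D problem can be sketched as follows (illustrative code, not the talk's algorithm: enumerating candidate slopes from point pairs and sorting residuals gives O(N^3 log N), whereas the talk attains O(N^2 log N)). A band of fixed vertical thickness plays the role of the digital line.

```python
def best_consensus_line(points, width):
    """For each pair of points defining a candidate slope, count the
    largest number of points fitting in a band of the given vertical
    thickness, and return the best count over all slopes."""
    best = 0
    for i, (x1, y1) in enumerate(points):
        for x2, y2 in points[i + 1:]:
            if x1 == x2:
                continue
            a = (y2 - y1) / (x2 - x1)
            # vertical residuals w.r.t. a line of slope a through (x1, y1)
            res = sorted(y - (y1 + a * (x - x1)) for x, y in points)
            # widest window of residuals spanning at most `width`
            lo = 0
            for hi in range(len(res)):
                while res[hi] - res[lo] > width:
                    lo += 1
                best = max(best, hi - lo + 1)
    return best

# Four near-collinear points plus two outliers.
pts = [(0, 0), (1, 1.1), (2, 1.9), (3, 3.0), (0, 5), (3, -4)]
assert best_consensus_line(pts, 0.5) >= 4
```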

Constructing the segment Delaunay triangulation by a flip algorithm

  • Mathieu Brévilliers, GIPSA
  • April 21, 10:30am, C103

Given a set S of disjoint segments in the plane, which we call sites, a segment triangulation of S is a partition of the convex hull of S into sites, edges, and faces. The set of faces is a maximal set of disjoint open triangles such that the vertices of each triangle lie on three distinct sites and the open sides of the triangles do not intersect S. The edges of a segment triangulation are the connected components (possibly two-dimensional) of the convex hull of S minus the sites and faces of the triangulation. The segment Delaunay triangulation of S is the one whose faces are inscribable in empty circles, that is, circles whose interiors do not intersect S. This triangulation is dual to the segment Voronoi diagram.

The aim of this talk is to give an algorithm that transforms, by a finite sequence of local improvements, any given segment triangulation into a segment triangulation with the same topology as the Delaunay one. This algorithm is a generalization of the so-called "flip" algorithm that transforms a triangulation of a point set into the Delaunay triangulation. The main difference with the classical flip algorithm is that the local improvements have to be computed on non-convex regions. We overcome this difficulty by using locally convex functions.

Topological and geometrical cutting of surfaces

Cutting a surface is a common requirement for several applications involving surface manipulation, e.g. in the context of computer graphics, medical imaging or geology. We propose an original framework that describes the notions of M-tiles and M-tilings, modeling cuttings as global structures on the surface. Using this framework, we first describe a general method to build an M-tiling of a given surface with a single M-tile homeomorphic to a disc. We introduce length constraints producing an optimal topological cutting. We then inject geometrical and application-driven constraints to handle the specificities of the surface, allowing the computation of low-distortion flattenings. Finally, we describe a cutting method that produces an M-tiling by quadrangles of any non-spherical surface. Such a tiling is in particular required to produce a parameterization of the surface. In this case, we resolve the constraints generated on the structure of the tiling by using an initial topological cutting into a single tile.

Tumor Tracking and a Liver Biopsy Simulator Accounting for Respiration

Respiration is a process mainly driven by two muscle groups: the diaphragm and the intercostal muscles. These movements, together with the physiology of the organs involved, require complex computational modeling in many contexts. Here we focus on two applications: radiotherapy and interventional imaging.

In radiotherapy, pulmonary motion makes treatment rather difficult. The current practice is to apply safety margins to account for the uncertainty in tumor position. Our solution uses a patient-specific biomechanical model following the laws of continuum mechanics. I will discuss the meshing method, the definition of boundary conditions tied to the real behavior of the organs, the deformation model, and its resolution by finite elements. Finally, I will show the results validating our approach. A study has also been conducted on converting the model's predicted data into a 4D CT scan in order to prepare the dosimetry.

Current surgical simulators do not correctly reproduce the non-reproducibility of respiratory motion. I will present the interventional radiology simulator for the viscera that we have developed. Movements due to respiration, and in particular to the motion of the diaphragm, play an important role in the deformation of the viscera. The simulator uses the ChainMail technique to compute respiratory motion in real time in a virtual reality environment. I will also detail the haptic interaction and show prototype simulations of ultrasound-guided liver biopsy and of needle insertion into the gallbladder under fluoroscopy.

Discrete curvature for grain segmentation and application to tomographic snow images

Deposited snow on the ground consists of ice grains that have been welded together by thermodynamic and mechanical constraints. To analyse the snow microstructure and to understand its formation and its time evolution, it is often required to numerically separate these ice grains from their neighbours. Commonly used three-dimensional imaging methods do not provide any direct information on the boundaries between grains. To obtain an accurate segmentation, we developed a geometrical method based on the detection of concavities on the ice surface. On the three-dimensional image, it is possible to compute, for each point of the surface, the Gaussian and mean curvatures, which, when combined, provide the sign of the smaller principal curvature. The surface of the object can then be separated into two regions, one positive and the other negative. Using a Voronoi diagram, these regions are extended to the whole object. The volume elements of the negative region are then removed, providing a set of disconnected objects. These objects are then used as seeds for a second Voronoi diagram, yielding the desired segmentation.
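
The region-extension step can be sketched with a brute-force discrete Voronoi labeling (a 2D toy in Python; the talk operates on 3D voxel images, where efficient distance transforms would be used instead):

```python
def voronoi_labels(shape, seeds):
    """Label each pixel with the index of its nearest seed: the discrete
    Voronoi diagram used to extend seed regions to the whole object."""
    h, w = shape
    labels = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            labels[y][x] = min(
                range(len(seeds)),
                key=lambda k: (seeds[k][0] - y) ** 2 + (seeds[k][1] - x) ** 2)
    return labels

# Two seeds in opposite corners partition the image into two "grains".
lab = voronoi_labels((4, 8), [(0, 0), (3, 7)])
assert lab[0][0] == 0 and lab[3][7] == 1
```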

What is a camera?

  • Jean Ponce, LIENS, ENS/INRIA/CNRS UMR 8548, Ecole Normale Supérieure
  • Mar. 03, 02:00 pm, room C 103

This talk addresses the problem of characterizing a general class of cameras under reasonable, ``linear'' assumptions. Concretely, we use the formalism and terminology of classical projective geometry to model cameras by two-parameter linear families of straight lines---that is, degenerate reguli (rank-3 families) and non-degenerate linear congruences (rank-4 families). This model captures both the general linear cameras of Yu and McMillan (2004) and the linear oblique cameras of Pajdla (2002). From a geometric perspective, it affords a simple classification of all possible camera configurations. From an analytical viewpoint, it also provides a simple and unified methodology for deriving general formulas for projection and inverse projection, triangulation, and binocular and trinocular geometry.

Parametrically Morphable Human Body Models Based on 3D Scan Data

  • Hyewon Seo, IGG team - Strasbourg
  • Feb. 27th, 10:00 am, room A008

We discuss a set of example-based techniques for generating realistic, controllable whole-body human models. Users are assisted in automatically generating a new model, or modifying an existing one, by controlling the provided parameters. Our approach consists of three major parts. First, each example from the 3D range scanner is preprocessed so that the topology of all examples is identical. Second, a system that we call the modeling synthesizer learns from these examples the correlation between the annotated attributes and the body geometry. After this learning process, the synthesizer generates body geometry of appropriate shape and proportion through interpolation. Finally, we demonstrate our modifier synthesizer for more subtle manipulations of example models, using high-level control parameters such as fat percentage. On any synthesized model, the underlying bone and skin structure is properly adjusted, so that the model remains fully animatable using joint-based animation. By allowing automatic modification from a set of parameters, our approach may eventually lead to the automatic generation of a variety of population models.

Practical construction of the Delaunay graph

We describe a new implementation of the well-known incremental algorithm for constructing Delaunay triangulations in any dimension. Our implementation follows the exact computing paradigm and is fully robust. Extensive comparisons show that it outperforms the best currently available codes for convex hulls and Delaunay triangulations, and that it can handle fairly large input sets in spaces of dimension up to 6. To circumvent prohibitive memory usage, we also propose a modification of the algorithm that uses and stores only the Delaunay graph (the edges of the full triangulation). We show that a careful implementation of the modified algorithm is only 5 to 8 times slower than the original while drastically reducing memory usage in dimension 4 and above.

Computing the Similarity of Geometric Graphs: Ideas and Experiments

A geometric graph is a graph whose vertices are a set of points in the plane, and which is drawn using straight edges connecting these vertices. What does it mean for two geometric graphs to be similar?

It is not easy to find a satisfactory definition of similarity. We propose a definition based on the graph edit distance and show that it is a metric. It can be computed by integer linear programming, but is unfortunately NP-hard to compute in general. We therefore give a heuristic distance that is easy to compute and seems to give useful results in practice. We present experiments done on thousands of graphs created from glyphs of Chinese characters.
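
The flavour of such a heuristic can be sketched as follows (a hypothetical toy measure, not the authors' distance): greedily match vertices of one graph to nearby vertices of the other, then count the edges that disagree under the matching.

```python
def heuristic_distance(g1, g2):
    """Toy similarity heuristic for geometric graphs (v, e), where v is a
    list of 2D points and e a list of vertex-index pairs: greedily match
    each vertex of g1 to its nearest unused vertex of g2, then count
    edges present in one graph but not the other."""
    v1, e1 = g1
    v2, e2 = g2
    match, used = {}, set()
    for i, (x, y) in enumerate(v1):
        j = min((j for j in range(len(v2)) if j not in used),
                key=lambda j: (v2[j][0] - x) ** 2 + (v2[j][1] - y) ** 2,
                default=None)
        if j is None:
            break
        match[i] = j
        used.add(j)
    mapped = {tuple(sorted((match[a], match[b])))
              for a, b in e1 if a in match and b in match}
    e2n = {tuple(sorted(e)) for e in e2}
    return len(mapped ^ e2n)      # size of the symmetric edge difference

# A graph compared with itself has distance zero.
g = ([(0, 0), (1, 0), (0, 1)], [(0, 1), (1, 2)])
assert heuristic_distance(g, g) == 0
```

A greedy matching like this is order-dependent and far from optimal, which is precisely the gap between an easy heuristic and the NP-hard edit distance discussed in the talk.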

Topologically-based Geometric Modelling

Abstract: In geometric modeling, improving both the representation and the manipulation of geometric objects is vital to analysing, editing and interacting with these objects. A classical mode of representation consists in decomposing the object into cells of various dimensions (vertices, edges, facets) connected by adjacency relations. In the context of modeling applications and geometry processing, where both flexibility and efficiency are important, combinatorial maps are a reference representation. Their mathematical origin allows for solid, provably correct and dimension-independent definitions. In this seminar, we will introduce various topologically-based representations, as well as some embedding models, such as subdivision surfaces and Gregory-Bezier patches.

Intuitive modeling of vapourish objects

  • Dmitry Sokolov (ALICE)
  • Room A006, Sept.,Thursday 25, 10:00 am

Abstract: Attempts to model gases in computer graphics started in the late 1970s. Since that time, there have been many approaches. Unfortunately, many of them may be over-complicated for modelers and animators who did not graduate in physics or mathematics. Here we present a new intuitive method designed to create vapourish objects like clouds or smoky characters. Even though the method is based on fractal geometry, a modeler should not be scared off by this term, since no special mathematical background is needed. The idea is to create a few sketches that generate the form of the final vapourish object. The advantages of the new method are simplicity, precise control of the results, and ease of animating the objects.

Quadrilateral/hexahedral meshing by L_\infty centroidal Voronoi Tessellation

  • Yang Liu (ALICE)
  • Room B11, Sept., Friday 19, 2:30 pm

Abstract: Quadrilateral/hexahedral elements have many advantages over triangles in applications such as FEM and CFD. We are investigating automatic quad/hex meshing methods. The core idea is to generalize the concept of the ``centroidal Voronoi tessellation'', which has shown its strength in isotropic and anisotropic triangular meshing. In this talk, I will present the basic idea and initial results based on our L_\infty centroidal Voronoi tessellation.
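
For background, the classical (L2) centroidal Voronoi tessellation is computed by Lloyd iteration; the 1D sketch below (illustrative only) shows its fixed point being evenly spaced sites for a uniform density. The talk replaces the L2 objective by an L_\infty one to favor square- and cube-shaped cells.

```python
def lloyd_1d(sites, samples, iters=300):
    """Plain (L2) Lloyd iteration on an interval: assign each sample to
    its nearest site, then move every site to its cluster's centroid."""
    sites = list(sites)
    for _ in range(iters):
        clusters = [[] for _ in sites]
        for s in samples:
            k = min(range(len(sites)), key=lambda k: abs(sites[k] - s))
            clusters[k].append(s)
        sites = [sum(c) / len(c) if c else sites[k]
                 for k, c in enumerate(clusters)]
    return sites

# Uniform density on [0, 1]: the CVT sites end up evenly spaced.
samples = [i / 1000 for i in range(1001)]
sites = lloyd_1d([0.01, 0.02, 0.03, 0.04], samples)
assert all(abs(s - t) < 0.01
           for s, t in zip(sorted(sites), (0.125, 0.375, 0.625, 0.875)))
```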

3D Visibility (old and new)

  • Xavier Goaoc (VEGAS)
  • Room B100, Sept., Friday 11, 2:00pm
  • Abstract:

The theme of 3D visibility touches upon a number of fields including computer graphics (hidden surface removal, illumination...), computer vision (shape recognition, aspect graphs...) and computational geometry (visibility graphs, visibility complexes...). I will give an informal presentation of some recent results obtained in the Vegas project, hoping to stimulate discussion. The main topics are the use of visual event surfaces for computing shadow boundaries, and methods for approximating visibility with an error margin fixed a priori.
