Monday, August 17, 2009

Pick Me, Rotate Me!

There were two nasty issues in GEF3D. Actually, there was no bug report on the first one, as it was no real bug but a conceptual problem we had with picking (that is, selecting a 3D object in a scene with the mouse). The other one caused many strange behaviours, documented in several bug reports (280977, 262529, 263441, 278634). I'm proud to say that we (Kristian, who did the main work -- great work, Kristian! --, and I, who helped out when Kristian couldn't see the forest for the trees ;-) ) solved both issues. This is a big step toward a final release, as GEF3D now supports almost everything necessary to 3D-fy existing editors and use them in a 3D environment with extra features such as inter-diagram connections! Unfortunately, both solutions made some API changes necessary, but in the end they make it even easier to 3D-fy existing 2D editors.


The first issue was about picking. We used color picking because I initially thought that this would be much easier and faster. In layman's terms, color picking simply means rendering a scene twice: once as it is shown on the screen, and a second time with a single unique color for each pickable object. Picking an object is then as simple as reading the color at the mouse position and looking up the object with that color in a map. The advantage of this technique is that you do not need to implement any geometric calculations, and that it is pretty fast and accurate.
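
The lookup part of this technique can be sketched in a few lines (the class and method names here are only illustrative, not actual GEF3D API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the color picking lookup (illustrative names, not GEF3D API):
// every pickable object gets a unique color; after rendering the color
// buffer, the pixel under the mouse is mapped back to the object.
public class ColorPicker {

    private final Map<Integer, Object> colorToObject =
            new HashMap<Integer, Object>();
    private int nextColor = 1; // 0 is reserved for "background, no hit"

    // assign a unique RGB value (packed into an int) to a pickable object
    public int register(Object pickable) {
        int color = nextColor++;
        colorToObject.put(color, pickable);
        return color;
    }

    // given the packed color read from the color buffer at the mouse
    // position, return the picked object (or null for background)
    public Object pick(int pixelColor) {
        return colorToObject.get(pixelColor);
    }
}
```

In the real implementation, the pixel color is of course read back from the OpenGL color buffer at the mouse position; here it is simply passed in.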

Unfortunately, it was really hard to use GEF's findFigureAt methods with this technique, because GEF allows adding a filter to these methods in order to ignore certain figures. The problem with color picking is that it is impossible to find a figure that is located "behind" an ignored figure, unless the color buffer is rendered again without the ignored figure. We used some cumbersome tricks to work around that problem, but in the end we had to find a new solution.
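
For comparison, a geometric hit test has no such limitation: it can simply skip a filtered figure and keep descending to whatever lies behind or below it. Here is a minimal sketch with hypothetical names (GEF's real findFigureAt works on draw2d figures and uses a TreeSearch):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a filtered hit test (hypothetical names, not GEF API): unlike
// color picking, a geometric search can simply skip filtered figures and
// still find whatever lies behind them.
public class FilteredSearch {

    public interface Filter { boolean accept(Figure f); }

    public static class Figure {
        public final String name;
        public final int x, y, w, h; // bounds
        public final List<Figure> children = new ArrayList<Figure>();
        public Figure(String name, int x, int y, int w, int h) {
            this.name = name; this.x = x; this.y = y; this.w = w; this.h = h;
        }
        boolean contains(int px, int py) {
            return px >= x && px < x + w && py >= y && py < y + h;
        }
    }

    // depth-first search; children (painted on top) take precedence.
    // Figures rejected by the filter are ignored, but their children and
    // the figures behind them are still considered.
    public static Figure findFigureAt(Figure root, int px, int py,
            Filter filter) {
        if (!root.contains(px, py))
            return null;
        for (int i = root.children.size() - 1; i >= 0; i--) {
            Figure hit = findFigureAt(root.children.get(i), px, py, filter);
            if (hit != null)
                return hit;
        }
        return filter.accept(root) ? root : null;
    }
}
```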

So we decided to remove color picking and implement geometric picking. That is, we have added methods to all objects shown in a 3D scene which can calculate the intersection with a ray. In order to improve performance, every figure can now calculate its paraxial bounding box. A paraxial bounding box is simply a cuboid whose edges are parallel to the axes of the world coordinate system and which contains the figure itself and all its children (see Fig. 1). If a ray hits the paraxial bounding box, the boxes of the children are searched, and so on. Finally, we determine whether the shape of a figure intersects with the ray. This way, we can fully support GEF's findFigureAt with filters (in GEF, a TreeSearch is used for filtering). As a side effect, rendering shapes is much easier, since they have to be rendered only once -- no color buffer has to be rendered any more. The trade-off is that every shape now has to implement a new interface, Pickable, which means it must be able to calculate the intersection of a ray with its shape. But don't worry: We provide a bunch of 3D shapes which implement this interface, and you can use these shapes to compose your own shapes. Also, we plan to provide more shapes in the future, so that in most cases you do not have to deal with these 3D issues at all.
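
The per-box test itself is standard ray/box intersection. Here is a minimal sketch of the classic "slab" method for a paraxial (axis-aligned) box, with our own names rather than Draw3D's actual API:

```java
// Sketch of a ray / paraxial (axis-aligned) bounding box test using the
// classic "slab" method; names and signatures are ours, not Draw3D API.
public class RayAabb {

    // ray: origin o, direction d (need not be normalized);
    // box: minimum corner min, maximum corner max
    public static boolean intersects(double[] o, double[] d,
            double[] min, double[] max) {
        double tNear = Double.NEGATIVE_INFINITY;
        double tFar = Double.POSITIVE_INFINITY;
        for (int i = 0; i < 3; i++) {
            if (d[i] == 0) {
                // ray parallel to this slab: must start inside it
                if (o[i] < min[i] || o[i] > max[i])
                    return false;
            } else {
                double t1 = (min[i] - o[i]) / d[i];
                double t2 = (max[i] - o[i]) / d[i];
                tNear = Math.max(tNear, Math.min(t1, t2));
                tFar = Math.min(tFar, Math.max(t1, t2));
                if (tNear > tFar)
                    return false; // slab intervals do not overlap
            }
        }
        return tFar >= 0; // box must not lie entirely behind the origin
    }
}
```

Only if this cheap test succeeds does the search descend to the children's boxes and, finally, to the exact shape intersection.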

Figure 1: Paraxial bounding boxes, rendered in Draw3D debug mode (use preferences to toggle debug mode)
Coordinate Systems and Rotation

The second issue was about coordinate systems. The problem with coordinate systems in 3D (and sometimes also in 2D) is that there are several of them, and frankly, in the end we lost control of where which coordinate system was used within GEF3D. Due to these coordinate problems, rotated figures in particular caused a lot of trouble. Thus we studied and refactored the code dealing with coordinate systems. Kristian has written a wiki article about that, explaining our solution in detail. In summary, GEF3D works with three coordinate systems:

  1. a world coordinate system with absolute 3D coordinates
  2. mouse coordinates, which are 2D coordinates relative to the canvas
  3. surface coordinates, which are 3D coordinates relative to the surface of a 3D figure. You will however notice that GEF3D often uses 2D surface coordinates, which are the projection of the X and Y components onto the plane Z=0.
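
To illustrate how surface and world coordinates relate, here is a minimal model of a surface (our own simplified types, not the actual ISurface API): a surface point is mapped into world space via the surface's world-space origin, its two in-plane axes, and its normal; a 2D surface point is simply the case Z=0.

```java
// Sketch of converting surface coordinates to world coordinates (our own
// minimal model, not the Draw3D ISurface API): a surface is described by
// its world-space origin, two in-plane unit axes, and its normal.
public class Surface {
    final double[] origin, xAxis, yAxis, normal;

    public Surface(double[] origin, double[] xAxis, double[] yAxis,
            double[] normal) {
        this.origin = origin; this.xAxis = xAxis; this.yAxis = yAxis;
        this.normal = normal;
    }

    // surface point (sx, sy, sz) -> absolute world coordinates;
    // a 2D surface point is simply the case sz = 0
    public double[] toWorld(double sx, double sy, double sz) {
        double[] w = new double[3];
        for (int i = 0; i < 3; i++)
            w[i] = origin[i] + sx * xAxis[i] + sy * yAxis[i] + sz * normal[i];
        return w;
    }
}
```

Because only the surface knows its own orientation, 2D code can keep working in flat surface coordinates even when the surface is rotated arbitrarily in world space.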

2D content is projected onto surfaces, and since 2D code handles the 2D content, GEF3D needs to hide all 3D related issues from figures (and their controllers) on surfaces.
To cut a long story short, we were able to implement surfaces as complete sandboxes for 2D content. This is not an entirely new concept, as it was already possible to 3D-fy 2D content, but surfaces make that even simpler. As a result, all tools of an embedded 2D editor now work out of the box, that is, new elements can be created, moved, or resized without the need to change these tools. In earlier versions, we had some problems with rotation, but even that problem is solved now. As you can see in Figure 2, you can simply move a 2D figure from one surface (1, 2) to another, and the feedback figure adjusts to the current surface (3, 4). Of course, in order to actually move the model (of the figure), the editor has to support that kind of operation. As you can also see, the connection between two 2D nodes on different rotated surfaces is correctly located!
Figure 2: Moving a 2D figure from one surface to another.
Changed Rendering Strategy and Shape Library

Another issue not mentioned above is transparency. The problem is that OpenGL supports translucent colors, but no real transparency. That is, an object may have a translucent color, but it is handled by OpenGL just like an opaque object: an object "behind" a transparent object is not painted if the transparent object is rendered before it. Only if the transparent object is rendered after the object behind it are the colors blended and the object rendered transparently.
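
For the curious, the blending OpenGL applies in the second case is simple per-channel arithmetic, assuming the standard glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) setup:

```java
// Sketch of the standard "source over" blend OpenGL applies per color
// channel with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA): the
// translucent fragment only mixes with what is already in the frame
// buffer, which is exactly why the render order matters.
public class Blend {
    // src/dst are channel values in [0,1], alpha is the source alpha
    public static double sourceOver(double src, double alpha, double dst) {
        return alpha * src + (1 - alpha) * dst;
    }
}
```

If the object behind was never written to the frame buffer, dst only contains the background, and the blend cannot recover it -- no matter how translucent the front object is.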

In order to implement "real" transparency with OpenGL, the programmer has to ensure that (transparent) objects are rendered after opaque objects and in the correct order, that is, from back to front. In GEF3D, 3D figures were rendered by (recursively) calling their render method. If a figure had to be rendered transparently, it created a temporary TransparentObject, causing the render engine of GEF3D to render the transparent objects after all opaque figures had been rendered (i.e., in a second render pass).

Unfortunately, a figure may be composed of other figures, which caused a big problem. If a child figure was transparent as well, its transparency property was only recognized when it was rendered, that is, when its (transparent) parent figure was rendered in the second render pass. At that point it was impossible to insert the (transparent) child into the sorted list of transparent objects correctly, as figures in this list may have been rendered already. In order to solve that problem, we changed the overall rendering strategy in GEF3D:

Instead of (recursively) calling the render method of each 3D figure, a newly introduced method Renderable.collectRenderFragments(RenderContext renderContext) is called (recursively). Instead of rendering the figure, the figure adds so-called RenderFragments to the render context via RenderContext.addRenderFragment(RenderFragment i_fragment). When all render fragments are collected, the render context first renders all opaque fragments, then sorts and renders all transparent fragments, and finally renders all superimposed fragments (objects to be rendered after everything else, e.g., feedback figures). If you have actually implemented some render code in your figure, you will have to implement the interface RenderFragment and implement collectRenderFragments:

public void collectRenderFragments(RenderContext renderContext) {
    renderContext.addRenderFragment(this);
}
In order to avoid the initial problem, RenderFragments are defined as leaves, that is, they are not allowed to be composed of other figures. The old interface TransparentObject has been removed.
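
The overall strategy can be sketched as follows (minimal types of our own; the real RenderContext API differs): fragments are collected first and then rendered in three passes, with transparent fragments sorted back to front by their distance from the camera.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sketch of the three-pass strategy described above (our own minimal
// types, not the actual Draw3D RenderContext API): opaque fragments
// first, then transparent fragments sorted back to front, then
// superimposed fragments.
public class RenderPasses {

    public enum Kind { OPAQUE, TRANSPARENT, SUPERIMPOSED }

    public static class Fragment {
        public final String name;
        public final Kind kind;
        public final double cameraDistance;
        public Fragment(String name, Kind kind, double cameraDistance) {
            this.name = name; this.kind = kind;
            this.cameraDistance = cameraDistance;
        }
    }

    // returns the names in the order the fragments would be rendered
    public static List<String> renderOrder(List<Fragment> collected) {
        List<Fragment> opaque = new ArrayList<Fragment>();
        List<Fragment> transparent = new ArrayList<Fragment>();
        List<Fragment> superimposed = new ArrayList<Fragment>();
        for (Fragment f : collected) {
            switch (f.kind) {
            case OPAQUE: opaque.add(f); break;
            case TRANSPARENT: transparent.add(f); break;
            default: superimposed.add(f);
            }
        }
        // back to front: larger camera distance is rendered first
        Collections.sort(transparent, new Comparator<Fragment>() {
            public int compare(Fragment a, Fragment b) {
                return Double.compare(b.cameraDistance, a.cameraDistance);
            }
        });
        List<String> order = new ArrayList<String>();
        for (Fragment f : opaque) order.add(f.name);
        for (Fragment f : transparent) order.add(f.name);
        for (Fragment f : superimposed) order.add(f.name);
        return order;
    }
}
```

Because all fragments are known before anything is rendered, a transparent child can always be sorted into the right place -- the very thing that was impossible with the old recursive render calls.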

Fortunately, in most cases you do not have to bother with these things, as you can use shapes as explained in the next section!


Due to these changes, and especially because of the need to calculate the intersection of a figure with a (pick) ray, it becomes more interesting to use predefined shapes. Currently, Draw3D includes cuboids, spheres, cylinders, cones, truncated cones, and polylines as primitives. If you look at the shapes package (org.eclipse.draw3d.shapes), you will notice that some of the shapes come in two flavours; for example, there is a CuboidShape and a CuboidFigureShape. A CuboidFigureShape actually wraps a CuboidShape and adds some convenient functionality in that it automatically sets some graphical properties of the CuboidShape. For example, it sets the CuboidShape's outline and fill colors to the foreground and background color of the figure to which the CuboidFigureShape is linked. This makes it very easy to create a cuboid shape that represents a figure: Just create a CuboidFigureShape, pass the figure to the constructor, and everything else is handled for you.
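
The wrapping idea can be sketched like this (simplified types of our own; the real Draw3D classes have much richer interfaces):

```java
// Sketch of the wrapping idea behind CuboidFigureShape (simplified types
// of our own, not the real Draw3D classes): the figure shape decorates a
// plain shape and copies graphical properties from its linked figure
// before delegating the actual rendering.
public class FigureShapeSketch {

    public interface Shape { void render(); }

    public static class Figure {
        public int foregroundColor, backgroundColor;
    }

    public static class CuboidShapeSketch implements Shape {
        public int outlineColor, fillColor;
        public void render() { /* draw the cuboid with OpenGL */ }
    }

    public static class CuboidFigureShapeSketch implements Shape {
        private final Figure figure;
        private final CuboidShapeSketch delegate = new CuboidShapeSketch();
        public CuboidFigureShapeSketch(Figure figure) {
            this.figure = figure;
        }
        public void render() {
            // sync graphical properties from the linked figure, then
            // delegate to the plain shape
            delegate.outlineColor = figure.foregroundColor;
            delegate.fillColor = figure.backgroundColor;
            delegate.render();
        }
        public CuboidShapeSketch getDelegate() { return delegate; }
    }
}
```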

We plan on adding such convenience wrappers for other shapes as well, as soon as they become necessary. If you need such a wrapper, let us know (by writing a post to the GEF3D newsgroup).

In order to use a shape, the figure has to create it somewhere and then add the shape (which implements both the RenderFragment and the Pickable interface) to the render context in collectRenderFragments. Since this technique is used rather often, a new figure called ShapeFigure3D has been introduced. All you need to do is implement its abstract method createShape() and create the shape there, e.g.,

protected Shape createShape() {
    return new CuboidFigureShape(this);
}
Note that ShapeFigure3D does not only use the shape for rendering, but also for calculating the distance (Figure3D#getDistance(Query)) and the paraxial bounding box (getParaxialBoundingBox(ParaxialBoundingBox))! You may have a look at this figure if you want to implement your own figure from scratch!

General API Changes

The following list shows the changes which are necessary in order to adjust an editor created with an older version of GEF3D to the latest revision (rev. 295):

  1. Changes usually found in your graphical editor:
    1. GraphicalViewer3DImpl no longer implements IScene. Instead, LightweightSystem3D now implements IScene, so if you need the IScene, simply replace the viewer with viewer.getLightweightSystem3D()
    2. LightweightSystem3D.addRendererListener(RenderListener i_listener) was moved to IScene.addSceneListener(ISceneListener i_listener)
    3. In order to show correct 3D handles and feedback figures, 2D edit parts have to be modified. This was already the case in earlier versions, but it has become easier now, and most cases of feedback creation are already implemented. Some new policies have to be installed as follows:
      • Create a node: ShowLayoutFeedbackEditPolicy3D (see Fig. 3, (1) and (2))
      • Create a connection: ShowSourceFeedback3DEditPolicy (see Fig. 3, (3) and (4))
      • Move or resize a node: Handles3DEditPolicy, to be installed at parent edit part (e.g., diagram) (see Fig. 3, (5) and (6))
      Figure 3: Feedback in 2.5D and 3D, simply by adding some policies.

      Only selecting a connection is not implemented yet; we will fix that as soon as possible.

      The best way to install these policies is by using the borg factory pattern, e.g.,
      EditPartFactory originalFactory = getGraphicalViewer().getEditPartFactory();
      BorgEditPartFactory borgFactory = new BorgEditPartFactory(originalFactory);
      // replace diagram edit part (the 3D replacement class is inferred
      // from the matcher below)
      borgFactory.addAssimilator(new EditPartReplacer(GraphEditPart.class,
          DiagramEditPart3D.class));
      // modify diagram edit part's policies
      borgFactory.addAssimilator(new AbstractPolicyModifier() {
          public boolean match(EditPart part) {
              return part instanceof DiagramEditPart3D;
          }
          public void modifyPolicies(EditPart io_editpart) {
              // feedback when creating a node:
              io_editpart.installEditPolicy(EditPolicy.LAYOUT_ROLE,
                  new ShowLayoutFeedbackEditPolicy3D());
              // handles and feedback when moving or resizing a node
              // (the role key here is only an example)
              io_editpart.installEditPolicy("Handles3D",
                  new Handles3DEditPolicy());
          }
      });
      // modify node edit part's policies
      borgFactory.addAssimilator(new IAssimilator.InstanceOf(
              NodeEditPart.class) {
          public EditPart assimilate(EditPart io_editpart) {
              // feedback when drawing a connection
              io_editpart.installEditPolicy(EditPolicy.GRAPHICAL_NODE_ROLE,
                  new ShowSourceFeedback3DEditPolicy());
              return io_editpart;
          }
      });
      getGraphicalViewer().setEditPartFactory(borgFactory);
  2. Changes in figures / edit parts: Often, diagram figures in 3D are simply figures displaying a cube. We provide a cube shape which can be used by the figure. Since using shapes in a figure is a very common case, a new ShapeFigure3D has been introduced. Here is an example of how to use it in combination with a diagram figure which provides a surface as well. In earlier versions, surfaces were implicitly provided by each 3D figure; now you have to explicitly provide a surface. A surface is only needed if 2D content is to be projected onto the 3D figure.
    public class GraphFigure3D extends ShapeFigure3D {

        private ISurface m_surface = new FigureSurface(this);

        public GraphFigure3D() {
            SurfaceLayout.setDelegate(this, new FreeformLayout());
            getPosition3D().setSize3D(new Vector3fImpl(400, 300, 20));
            setAlpha((byte) 0x44);
        }

        public ISurface getSurface() {
            return m_surface;
        }

        /**
         * {@inheritDoc}
         * @see org.eclipse.draw3d.ShapeFigure3D#createShape()
         */
        protected Shape createShape() {
            return new CuboidFigureShape(this);
        }
    }
We have adjusted all examples (the 3D ecore editor, 3D UML editor, and 3D graph editor), so you may have a look at these editors.

I have written a tutorial article explaining the basic concepts of GEF3D; it will be published in the next issue of the German journal Eclipse Magazin. An English version of this article will be made available as soon as possible. In this first article, a sample GEF editor is 3D-fied. We plan to write a follow-up article explaining how to 3D-fy GMF based editors as well.

