Better control over positions and sizes

  • Currently, positioning and sizing UI elements is done using relativePosition and relativeSize arguments, as in:

    Java
    GuiPanel panel = new GuiPanel();
    panel.setPosition(x, y, relativePosition); //one flag covers both x and y
    panel.setSize(width, height, relativeSize); //one flag covers both width and height

    The issue with this is that it is impossible to set the x, y, width or height individually as relative.


    So I propose a significant, but reasonably backward-compatible change to the API. For example:
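
    A minimal sketch of the overloads I have in mind (the signatures are illustrative, not the actual game API): float arguments mean relative values, int arguments mean absolute values, and mixed overloads cover the remaining combinations.

    Java
    public class GuiPanel {
        private float x, y, width, height;
        private boolean relativeX, relativeY, relativeWidth, relativeHeight;

        //relative position: both values between 0 and 1
        public GuiPanel setPosition(float x, float y) {
            this.x = x; this.y = y;
            this.relativeX = true; this.relativeY = true;
            return this; //returning this enables method chaining (see below)
        }

        //absolute position: both values in pixels
        public GuiPanel setPosition(int x, int y) {
            this.x = x; this.y = y;
            this.relativeX = false; this.relativeY = false;
            return this;
        }

        //mixed: relative x, absolute y (an (int, float) overload would cover the inverse)
        public GuiPanel setPosition(float x, int y) {
            this.x = x; this.y = y;
            this.relativeX = true; this.relativeY = false;
            return this;
        }

        //setSize(float, float), setSize(int, int), etc. would follow the same pattern
    }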


    The idea behind setPosition is that the method would return this, allowing method chaining. E.g. element.setPosition(0, 0).setSize(1f, 10) (i.e. position at pixel 0,0, and give a size of 100% width by 10 pixels).



    Note: with this implementation, the constructor may remain empty (i.e. the current 6-argument one may be marked as deprecated and would call either setPosition(float, float) or setPosition(int, int))
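
    For illustration, the deprecated 6-argument constructor could simply delegate to the new overloads (argument order assumed from the current API, and assuming matching setSize overloads):

    Java
    @Deprecated
    public GuiPanel(float x, float y, boolean relativePosition,
                    float width, float height, boolean relativeSize) {
        if (relativePosition) setPosition(x, y);
        else setPosition((int) x, (int) y);
        if (relativeSize) setSize(width, height);
        else setSize((int) width, (int) height);
    }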


    Why?
    The idea is that all relative positions and sizes are between 0 and 1 (floating-point values) and all absolute positions (i.e. pixel positions) are between 0 and some arbitrary maximum (integer values, because there is no such thing as a sub-pixel). Therefore, an API that allows such flexibility will help create better user interfaces.


    Consequently, isRelativePosition should be replaced with isRelativePositionX and isRelativePositionY, etc. With these values, it does not matter that getPositionX returns a floating-point value, because we can know whether it is relative or absolute. To remain backward compatible, isRelativePosition should return true if either x or y is relative, but should be marked as deprecated.
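
    Concretely, the accessors could look like this (a sketch, reusing the fields assumed in the overload sketch above):

    Java
    public boolean isRelativePositionX() { return relativeX; }
    public boolean isRelativePositionY() { return relativeY; }

    //back compatible: true if either axis is relative
    @Deprecated
    public boolean isRelativePosition() {
        return isRelativePositionX() || isRelativePositionY();
    }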

  • This is what I would have expected in a base UI component API for this game (note that I do not have the full implementation source code, so I implemented what I managed to recover from the decompiler)



    And I would propose a few other methods to access various information about the player's screen. My goal is to make this game easy to work with for people coming from desktop UI design, by exposing something that is widely used: screen coordinates. With a better API offering more control over positioning and sizing, plugins can implement more responsive layouts (for example, changing font sizes, or making panels bigger when displaying lists at larger resolutions, etc.).
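
    For instance, hypothetical per-player screen accessors (the method names below are assumptions, not existing API calls) would make responsive layouts straightforward:

    Java
    //all names here are hypothetical; the actual API may differ
    void applyResponsiveLayout(Player player, GuiPanel listPanel, GuiLabel titleLabel) {
        int screenHeight = player.getScreenResolutionY(); //assumed accessor
        if (screenHeight >= 1440) {
            listPanel.setSize(600, 900, false); //bigger panel on large resolutions
            titleLabel.setFontSize(24);         //larger font as well
        }
    }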


    The argument for keeping a Cartesian coordinate system (vertical axis pointing up, as in OpenGL) based on texture mapping is not a strong one. Once the element is vertically positioned using a screen coordinate system (vertical axis pointing down, as in nearly all UI frameworks... because this is how humans read, from top to bottom) and its size is set, finding the proper x and y to lay out the texture is trivial. However, laying out a complex UI where elements are arranged from last to first is awkward for the designer. In short, there is no real reason to use a Cartesian coordinate system for UI design, especially when it has no bearing whatsoever on laying out textures during the rendering phase.
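
    To show how trivial that conversion is, mapping a top-left-origin y coordinate to a bottom-left-origin (OpenGL) one at render time is a single subtraction:

    Java
    //screen-space y (top-left origin) to OpenGL y (bottom-left origin)
    static int toOpenGlY(int screenHeight, int y, int elementHeight) {
        return screenHeight - (y + elementHeight);
    }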


    Also, note that the base element binds to one player, and one player only. Why? Because there is no sense in having a hierarchical object on the server (where the UI is declared) whose state is not in sync with the structure on the client (where the UI is rendered). For example, a checkbox component that is a child of a panel does not have the same state for Player1 as for Player2, so why would the parent panel be updated for both players at the same time? If UI elements are properly cached on the client and synchronized with the server, there shouldn't be memory leaks, and performance should not be impaired whatsoever.



    ** Edit **

    A keen eye will have observed that I did not import PivotPosition; this is because I added some constants to it, in order to facilitate positioning of some elements:
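
    The constants I added were along these lines (a sketch of the nine usual anchor points; the exact names in my edit may differ):

    Java
    public enum PivotPosition {
        TopLeft,    TopCenter,    TopRight,
        CenterLeft, Center,       CenterRight,
        BottomLeft, BottomCenter, BottomRight
    }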




  • Good thoughts. Let's see what red's opinion is on this :)

    For example, a checkbox component that is a child of a panel does not have the same state for Player1 as for Player2, so why would the parent panel be updated for both players at the same time? If UI elements are properly cached on the client and synchronized with the server, there shouldn't be memory leaks, and performance should not be impaired whatsoever.

    What I did to prevent this was to add the whole GUI panel as an attribute (player.addAttribute(panel)) to each player when they log in to the server; this way all players have their own instance of the GUI, and when I update/change it in some way for one player, it doesn't update for all the others.
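
    A sketch of that approach (the event and attribute method names are assumptions; adapt them to the actual plugin API):

    Java
    //each player gets their own GUI instance on login
    @EventMethod
    public void onPlayerConnect(PlayerConnectEvent event) {
        Player player = event.getPlayer();
        GuiPanel panel = new GuiPanel(0.5f, 0.5f, true, 0.3f, 0.3f, true);
        player.setAttribute("hud", panel); //per-player state, not shared
        player.addGuiElement(panel);       //show it only to this player
    }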


  • Yeah, the problem is that changing the UI component also means breaking all plugins, this is why I proposed the "uix" (or any other different namespace) name for this implementation. Much like how Java had AWT, then switched to a different framework called Swing later on.


    Of course, I fully understand the extra weight that this adds to the API, but I really think that this is for the best. And the game is still in alpha, let's not forget that. Changes are bound to happen, and better so early on than later.


    Storing the player UI in attributes is one of the things that gave me a hint of the problem with player UI binding (and how plugins need to manually propagate and sync UI with the player...). :)

  • Sorry for my late response, too many things going on in the past days :D


    So I propose a significant, but reasonably backward-compatible change to the API. For example: ...setPosition(float x, float y)...setPosition(int x, int y)...

    Basically this sounds like a good idea, but I see two problems coming with this change. On the one hand, I'm afraid that this is more error-prone: people might accidentally use float values instead of integers and vice versa. E.g. something like setPosition(10f, 100); would give an unintended result (apparently the user wanted absolute coordinates, but accidentally provided a float value). Another example where things could get mixed up easily: setSize(100 * 0.5f, 500);


    But on the other hand, the biggest issue comes from the result of the new getPositionX() and getPositionY() functions: if I use the position of one element to set the position of another element, I expect the second element to use exactly the same position. For example, this code would not work as intended:


    Java
    //set absolute position for element1
    GuiElement element1 = ...
    element1.setPosition(200, 500);
    //now I want to place element2 10 pixels to the right of element1
    GuiElement element2 = ...
    element2.setPosition(element1.getPositionX() + 10, element1.getPositionY()); //<- unexpected result: getPositionX() returns a float, so the relative (float) overload is selected


    It's just too confusing that coordinates obtained from the getPosition() or getSize() functions are always considered relative when passed back to the setters (unless we explicitly cast them to int).


    The idea behind setPosition is that the method would return this, allowing method chaining. E.g. element.setPosition(0, 0).setSize(1f, 10) (i.e. position at pixel 0,0, and give a size of 100% width by 10 pixels).

    Is chaining really necessary for GUI elements? There are actually some objects out there where chaining is quite helpful in my opinion (e.g. buffers), but I think chaining makes the readability of code (at least for something like this) worse. I don't have the impression that this code (just an example) is well-arranged and readable:
    Java
    element.setPosition(0, 0, false).setSize(100, 200, false).setPivotPosition(PivotPosition.Center).setColor(0xFF0000FF).setBorderThickness(2, false).setBorderColor(0x00000000);


    The argument for keeping a Cartesian coordinate system (vertical axis pointing up, as in OpenGL) based on texture mapping is not a strong one.

    Basically the main argument is that we want to stick to the OpenGL specification, which considers the vertical axis pointing up. Sticking to specifications is always recommendable in my opinion. But especially when accessing texture coordinates or when using shaders, people have to stick to the OpenGL conventions. If we decide to change the vertical axis, how can we justify that change when people still have to use regular OpenGL coordinates for all other things? For example: If I want to use post-processing effects (which are not yet implemented), the whole screen is rendered to a texture. Now I want to add a certain effect behind a GUI element (e.g. having a transparent panel, and the world behind it should use blurred or inverted colors). Right now I could just use exactly the same coordinates of the GUI element in my shader, but if we change the vertical axis, it would be necessary to flip the y value manually. This isn't intuitive and we really want to keep these things consistent :|


    vertical axis pointing down, as in nearly all UI frameworks... because this is how humans read, from top to bottom

    OSX also considers the bottom left corner of the screen as origin, and it looks like most CAD software also uses the bottom left screen corner as origin. Having the origin in the top left corner makes sense for texts, but a GUI usually consists of more than that, especially a GUI for a game.


    For example, a checkbox component that is a child of a panel does not have the same state for Player1 as for Player2, so why would the parent panel be updated for both players at the same time?

    I agree with that, at least when it comes to elements which have a certain state (which depends on the user), e.g. GUITextFields or GUIFileBrowsers. We will think about changing that^^


    A keen eye will have observed that I did not import PivotPosition; this is because I added some constants to it, in order to facilitate positioning of some elements:

    It is actually our intention to add more pivot positions. This was also suggested by @Miwarre some time ago, but this requires some changes to our internal GUI (which does not yet take these new pivot positions into account). It's on our list, I guess that change will be available in the near future ;)


    Yeah, the problem is that changing the UI component also means breaking all plugins, this is why I proposed the "uix" (or any other different namespace) name for this implementation. Much like how Java had AWT, then switched to a different framework called Swing later on.

    Hmm... so you want to have two types of GUIs in the API? =O I think it's definitely preferable to improve the existing GUI instead of adding another, separate GUI to the API.
