Sorry for my late response, too many things going on in the past few days 
So I propose a significant, but reasonably backward-compatible change to the API. For example: ...setPosition(float x, float y)...setPosition(int x, int y)...
Basically this sounds like a good idea, but I see two problems coming with this change. On the one hand, I'm afraid that it is more error-prone: people might accidentally use float values instead of integers and vice versa. E.g. something like setPosition(10f, 100); would give an unintended result (apparently the user wanted to use absolute coordinates, but accidentally provided a float value). Another example where things could get mixed up easily: setSize(100 * 0.5f, 500);
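A minimal sketch of why this is so easy to get wrong. The class and its return values are made up purely for illustration (they are not the actual API); the point is that Java's overload resolution widens int arguments to float, so a single stray float silently switches the whole call to the relative-coordinate overload:

```java
// Hypothetical stand-in for the proposed overloads: the return value
// just reports which overload Java's resolution actually picked.
public class OverloadDemo {

    static String setPosition(float x, float y) {
        return "relative"; // float arguments -> relative coordinates
    }

    static String setPosition(int x, int y) {
        return "absolute"; // int arguments -> absolute pixel coordinates
    }

    public static void main(String[] args) {
        System.out.println(setPosition(10, 100));         // "absolute", as intended
        // A single 'f' suffix makes the other int widen to float as well,
        // so the whole call silently becomes relative:
        System.out.println(setPosition(10f, 100));        // "relative"
        // Any float arithmetic has the same effect:
        System.out.println(setPosition(100 * 0.5f, 500)); // "relative"
    }
}
```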
But on the other hand, the biggest issue comes from the result of the new getPositionX() and getPositionY() functions: If I use the position of an element and set it for another element, I expect the element to actually use exactly the same position. For example, this code would not work as intended:
//set absolute position for element1
GuiElement element1 = ...
element1.setPosition(200, 500);
//now I want to place element2 10 pixels next to element1
GuiElement element2 = ...
element2.setPosition(element1.getPositionX() + 10, element1.getPositionY()); //<- unexpected results
It's just too confusing that coordinates obtained from the getPosition() or getSize() functions are always treated as relative coordinates when passed back to a setter (unless we explicitly cast them to int).
The idea behind setPosition is that the method would return this and allow for method chaining. E.g. element.setPosition(0, 0).setSize(1f, 10) (i.e. position at pixel 0,0, and a size of 100% width by 10 pixels).
Is chaining really necessary for GUI elements? There are actually some objects out there where chaining is quite helpful in my opinion (e.g. buffers), but I think chaining makes the readability of code (at least for something like this) worse. I don't have the impression that this code (just an example) is well-arranged and readable:
element.setPosition(0, 0, false).setSize(100, 200, false).setPivotPosition(PivotPosition.Center).setColor(0xFF0000FF).setBorderThickness(2, false).setBorderColor(0x00000000);
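For what it's worth, setters that return this don't force anyone into the chained style: the return value can simply be ignored and each call written as its own statement. A minimal sketch with a hypothetical stub element (ChainDemo and its fields are invented for illustration, not part of the API):

```java
// Hypothetical stub whose setters both mutate state and return 'this',
// so the same configuration can be written chained or unchained.
public class ChainDemo {
    int x, y, w, h;

    ChainDemo setPosition(int x, int y) { this.x = x; this.y = y; return this; }
    ChainDemo setSize(int w, int h)     { this.w = w; this.h = h; return this; }

    public static void main(String[] args) {
        // Chained: compact, but every call is squeezed into one expression.
        ChainDemo a = new ChainDemo().setPosition(0, 0).setSize(100, 200);

        // Unchained: more verbose, but each statement stands on its own.
        ChainDemo b = new ChainDemo();
        b.setPosition(0, 0);
        b.setSize(100, 200);

        // Both styles end in the same state.
        System.out.println(a.x == b.x && a.w == b.w); // true
    }
}
```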
The argument of keeping a Cartesian coordinate system (vertical axis pointing up, as it is in OpenGL) based on texture mapping is not a strong one.
Basically the main argument is that we want to stick to the OpenGL specification, which considers the vertical axis pointing up. Sticking to specifications is always advisable in my opinion, but especially when accessing texture coordinates or when using shaders, people have to follow the OpenGL conventions anyway. If we decide to change the vertical axis, how can we justify that change when people still have to use regular OpenGL coordinates everywhere else?

For example: if I want to use post-processing effects (which are not yet implemented), the whole screen is rendered to a texture. Now I want to add a certain effect behind a GUI element (e.g. a transparent panel where the world behind it should use blurred or inverted colors). Right now I could just use exactly the same coordinates of the GUI element in my shader, but if we change the vertical axis, it would be necessary to flip the y value manually. This isn't intuitive, and we really want to keep these things consistent 
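To make the cost of such a change concrete, this is the flip that every shader-facing coordinate would then need. A minimal sketch; the function and parameter names are assumed for illustration, not part of the API:

```java
public class CoordFlip {

    // Convert a y coordinate measured from the top edge (top-left origin)
    // into the bottom-left-origin y that OpenGL textures and shaders expect.
    static float toOpenGlY(float topDownY, float screenHeight) {
        return screenHeight - topDownY;
    }

    public static void main(String[] args) {
        // A point 100 px below the top of a 1080 px tall screen sits at
        // y = 980 in OpenGL coordinates.
        System.out.println(toOpenGlY(100f, 1080f)); // 980.0
    }
}
```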
vertical axis pointing down, as it is for nearly all UI frameworks... because this is how humans read, from top to bottom
OS X also considers the bottom-left corner of the screen as the origin, and it looks like most CAD software also uses the bottom-left screen corner as the origin. Having the origin in the top-left corner makes sense for text, but a GUI usually consists of more than that, especially a GUI for a game.
For example, a checkbox component that is a child of a panel does not have the same state on Player1 as on Player2, so why would the parent panel be updated on both players at the same time?
I agree with that, at least when it comes to elements which have a certain state (which depends on the user), e.g. GUITextFields or GUIFileBrowsers. We will think about changing that^^
A keen eye will have observed that I did not import PivotPosition, and this is because I added some constants to it, in order to facilitate the positioning of some elements:
It is actually our intention to add more pivot positions. This was also suggested by @Miwarre some time ago, but this requires some changes to our internal GUI (which does not yet take these new pivot positions into account). It's on our list, I guess that change will be available in the near future 
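Purely as an illustration of the direction (the constant names here are guesses, not the actual planned additions), an extended pivot enum could cover all nine anchor points of an element:

```java
// Hypothetical extended set of pivot positions (names assumed):
public enum PivotPosition {
    TopLeft,    TopCenter,    TopRight,
    CenterLeft, Center,       CenterRight,
    BottomLeft, BottomCenter, BottomRight
}
```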
Yeah, the problem is that changing the UI component also means breaking all plugins, which is why I proposed the "uix" (or any other different) namespace for this implementation. Much like how Java started with AWT and later introduced a separate framework, Swing, alongside it.
Hmm... so you want to have two types of GUIs in the API?
I think it's definitely preferable to improve the existing GUI instead of adding another, separate GUI to the API.