Lessons from Molecules: OpenGL ES

OpenGL teapot sample

I had a great time at the satellite iPhoneDevCamp in Chicago, where I was surprised by the number of well-known (to me, at least) developers who are located in the area. Although the San Francisco outpost may have had more attendees, I'd argue that our technical and business discussions were just as good. We certainly had better shirts (courtesy of Stand Alone).

Anyway, I was allowed to give a talk on some of the lessons that I'd learned in implementing the 3-D graphics in Molecules using OpenGL ES. It was well-received, so I figured that I should create a written guide based on my talk.

First, before I get into the details of the post, I'd like to indicate that nothing I say here should be in violation of the still-in-place NDA that Apple has required iPhone developers to sign. OpenGL ES is an open standard, all benchmarks that I point to are either publicly available or were generated by an application that is currently in the App Store, and I will not make any reference to Apple's specific APIs. Molecules is nominally open source, but I cannot share the source code publicly until the NDA is lifted.

Also, I will refer to the iPhone throughout this post, but the iPod Touch has the same graphical hardware within it, so all lessons here will apply to that class of device as well.

3-D hardware acceleration on the iPhone

The Samsung SoC also features an implementation of Imagination Technologies’ PowerVR MBX Lite 3D accelerator, ... This fourth-generation PowerVR chipset is basically an evolution of the second-generation graphics hardware used in the Sega Dreamcast...
"Under the Hood: The iPhone’s Gaming Mettle" - Touch Arcade

Before we get into the software side of 3-D rendering on the iPhone, it helps to know what we can expect from the platform. This has been the cause of some debate lately, so I wanted to gather some hard numbers of my own and compare them with other benchmarks out there. Specifically, I wanted to see how many triangles per second the iPhone is capable of pushing, to give an idea as to the kind of complex 3-D geometry and frame rates it can support.

I used the current version of Molecules that's in the App Store (with a few small tweaks) to perform these tests. I loaded molecules of varying complexity (and thus, different numbers of triangles required to render) and measured the time it took to render 100 consecutive frames of rotation for these molecules. From that, I obtained the frame rate.

Triangles in model    Frames per second    Triangles per second
9,720                 27.81                270,334
14,072                16.90                238,179
22,540                 9.95                224,211
31,000                 7.50                232,612
36,416                 6.44                234,691
48,856                 5.61                274,278
86,480                 2.73                235,772
122,896                1.90                233,276

The results show an average throughput of about 243,000 triangles per second, with a variability of about 7.7%. GLBenchmark has published OpenGL ES benchmarks on the iPhone under a variety of conditions. Under similar conditions (no textures, smooth shading, one spotlight), they report a higher throughput of 470,957 triangles per second. I don't know the exact mechanism of their benchmark, but the difference is most likely due to suboptimal code in my application.

These numbers need some context. I've tabulated the pure rendering performance statistics that I could find for other devices of interest:

Device Max triangles per second
iPhone [1] 613,918
Nokia N95 [2] 719,206
Nintendo DS [3] 120,000
Sony PSP [4] 33,000,000

[1] http://www.glbenchmark.com/phonedetails.jsp?benchmark=pro&D=Apple%20iPhone&testgroup=lowlevel

[2] http://www.glbenchmark.com/phonedetails.jsp?benchmark=pro&D=Nokia%20N95&testgroup=lowlevel

[3] http://en.wikipedia.org/wiki/Nintendo_DS

[4] http://gameboy.ign.com/articles/430/430939p1.html

I used the highest numbers I could find for each device, and real-world performance could be much lower, but I think the numbers tell a clear story. The highest-ranked cell phone in GLBenchmark's tests was the Nokia N95, but it holds only a slight edge over the iPhone in pure performance. The bestselling portable gaming platform at the moment is the Nintendo DS, and the iPhone is clearly ahead of that, with up to five times the rendering performance. The Sony PSP figures are based on Sony's press numbers, and Sony is well-known for overstating the performance of their products, but even if real-world performance is only 20% of this figure, the PSP still comes out way ahead of the iPhone. If anyone in the PSP homebrew community has more reliable benchmark numbers to share, I'd be glad to post them.

Overall, the iPhone is a very capable device in terms of 3-D performance, even if it isn't the best of all portable devices. It does have a more powerful general purpose processor than the PSP (a 412 MHz ARM CPU vs. a 222 - 333 MHz MIPS CPU), which could lead to better AI or physics. It's important to remember that, outside of the homebrew community, very few small developers will ever have a chance to make something on the DS or PSP, while the iPhone SDK and App Store are open to all. Finally, the ease of programming for the iPhone far exceeds other mobile devices. For example, I wrote Molecules in the span of about three weeks, only working on it during nights and weekends.

Introduction to OpenGL ES

Now that we know what the 3-D hardware acceleration capabilities of the iPhone are, how do we go about programming the device? The iPhone uses the open standard OpenGL ES as the basis for its 3-D graphics. OpenGL ES is a version of the OpenGL standard optimized for mobile devices. OpenGL is a C-style procedural API that uses a state machine to let you control the graphics processor. That sounds confusing, but the gist of it is that OpenGL uses a small set of C-style functions to switch into a particular state, such as setting up lighting, configuring the camera, or drawing 3-D objects, and to perform work in that state. For example, you can enable the lighting state via

	glEnable(GL_LIGHTING);
and then send commands to configure the lighting. When you are ready to set up geometry, you change to that state and send the commands needed for that.

OpenGL seems incredibly complex, and it's natural to want to throw up your hands and walk away, but I urge you to just spend a little more time with it. Once you get over the initial conceptual hurdles, you'll find that there's a reasonably simple command set involved. I was a complete newbie to OpenGL two months ago, and now I have a functional application based on it.

OpenGL has been around for a while and has accumulated a number of ways of doing the same tasks, many of them less efficient than others, so the standards committee took the opportunity with OpenGL ES to start fresh and clean out a lot of the cruft. In fact, desktop OpenGL has since followed OpenGL ES's lead in deprecating much of this older functionality. I won't get too far into what has changed for those familiar with OpenGL, but I'll just list a few things that I came across. First, OpenGL ES drops support for immediate mode. What this means is that code like the following:

	glBegin(GL_TRIANGLES);
	glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
	glVertex3f(0.0f, 0.0f, 0.0f);
	glVertex3f(1.0f, 0.0f, 0.0f);
	glVertex3f(0.0f, 1.0f, 0.0f);
	glEnd();

will not work in OpenGL ES. Many of the OpenGL examples you find online use immediate mode, and will need to be updated before they can work in OpenGL ES. Second, you can only draw triangles directly, not other polygons. Because any polygon can be made using triangles, this was seen as a redundancy. Third, OpenGL ES supports fixed-point numbers, which helps for performance on non-desktop devices that lack floating-point processing power. Finally, OpenGL ES supports smaller data types for geometry data, which can save memory, but can also cause problems. For example, unsigned shorts are the largest index data types you can use, which limits you to addressing only 65,536 vertices in one vertex array. This caused problems for me on some larger molecules until I started using multiple vertex arrays.
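A hypothetical helper for that splitting logic (the function name is mine, not from Molecules): with 16-bit indices, a model's vertices must be divided into chunks of at most 65,536.

```c
// Maximum number of vertices addressable by GLushort indices (0..65535)
#define MAX_VERTICES_PER_ARRAY 65536u

// How many separate vertex arrays a model with vertexCount vertices needs
// when glDrawElements() is limited to GL_UNSIGNED_SHORT indices.
unsigned int vertexArraysNeeded(unsigned int vertexCount) {
    // Round up: any partial chunk still needs its own array
    return (vertexCount + MAX_VERTICES_PER_ARRAY - 1) / MAX_VERTICES_PER_ARRAY;
}
```

A model with 65,536 or fewer vertices fits in one array; one with 200,000 vertices would need four.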

For more on OpenGL ES, I'd highly recommend the book "Mobile 3D Graphics: with OpenGL ES and M3G" by K. Pulli, T. Aarnio, V. Miettinen, K. Roimela, and J. Vaarala. Also, Dr. Dobb's Journal has a couple of excellent articles on OpenGL ES: "OpenGL and Mobile Devices" and "OpenGL and Mobile Devices: Round 2". The latter focuses on iPhone development and contains material under NDA that I will not discuss here.

With the introductions over, I'd like to highlight some specific things that I've learned during the design of Molecules.

Simulating a sphere using an icosahedron

In Molecules, each atom of a molecule's structure is represented by a sphere. I tried out a number of different ways of representing a sphere in OpenGL before I arrived at the approach I use now. I currently draw an icosahedron, a 20-sided polyhedron, as a low-triangle-count stand-in for a sphere. The code to set up an icosahedron's vertex and index arrays is as follows:

#define X .525731112119133606
#define Z .850650808352039932
static GLfloat vdata[12][3] = {
	{-X, 0.0, Z}, {X, 0.0, Z}, {-X, 0.0, -Z}, {X, 0.0, -Z},
	{0.0, Z, X}, {0.0, Z, -X}, {0.0, -Z, X}, {0.0, -Z, -X},
	{Z, X, 0.0}, {-Z, X, 0.0}, {Z, -X, 0.0}, {-Z, -X, 0.0}
};
static GLushort tindices[20][3] = {
	{0,4,1}, {0,9,4}, {9,5,4}, {4,5,8}, {4,8,1},
	{8,10,1}, {8,3,10}, {5,3,8}, {5,2,3}, {2,7,3},
	{7,10,3}, {7,6,10}, {7,11,6}, {11,0,6}, {0,1,6},
	{6,1,10}, {9,0,11}, {9,11,2}, {9,2,5}, {7,2,11}
};

This code was drawn from the OpenGL Programming Guide, also known as the Red Book.

I should define the terms vertex and index for those new to OpenGL. A vertex is a point in 3-D space of the form (x,y,z). An index is a number that refers to a specific vertex contained within an array. You can think of this in terms of connect-the-dots, where the dots are vertices and the numbers that tell you what to connect to what are the indices. In the above example, you see triplets of indices that specify one 3-D triangle each. The array of vertices and array of indices combine to define the 3-D structure of your objects. Arrays of color values and lighting normals complete the geometry.
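To make the connect-the-dots analogy concrete, here is a small sanity check on the icosahedron arrays above: 12 unique vertices are shared among 20 triangles, so the index array describes 60 corner references while only 36 floats of position data are stored.

```c
// Dimensions of the icosahedron arrays shown above
enum { NUM_VERTICES = 12, NUM_TRIANGLES = 20 };

// Number of index entries: three corners per triangle
int indexCount(void) { return NUM_TRIANGLES * 3; }

// Number of stored floats: three coordinates per unique vertex
int coordCount(void) { return NUM_VERTICES * 3; }

// Floats that would be needed if no vertices were shared between triangles
int unsharedCoordCount(void) { return NUM_TRIANGLES * 3 * 3; }
```

Indexing stores 36 coordinates instead of the 180 that unshared triangles would require, which is exactly the saving that makes index arrays worthwhile.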

Speaking of lighting normals, I was able to squeeze a little more out of the icosahedrons and make them a little closer to spheres by turning on smooth shading using

	glShadeModel(GL_SMOOTH);
and setting the normals for the model to point through the vertices. If the normals had been pointed perpendicular to the faces of the triangles, the edges of the polyhedron would be sharp and the object would be shaded as its faceted geometry would suggest. By pointing the lighting normals through the vertices, the object is shaded like a sphere and the faces are smoothed.
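For a polyhedron inscribed in a unit sphere, the per-vertex normal is simply the vertex position itself, which is one reason the icosahedron data above is convenient: every vertex already lies on the unit sphere. A quick check, using the X and Z values from the #defines above:

```c
#include <math.h>

// The icosahedron constants from the code above
#define ICO_X 0.525731112119133606
#define ICO_Z 0.850650808352039932

// Length of any icosahedron vertex: each one is a permutation
// of (+/-X, 0, +/-Z), so its length is sqrt(X^2 + Z^2).
double icosahedronVertexLength(void) {
    return sqrt(ICO_X * ICO_X + ICO_Z * ICO_Z);
}
```

Since the length is 1, the normalized vertex position (the smooth-shading normal) is the vertex itself, and no extra normalization pass is needed.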

Improving rendering performance using vertex buffer objects

As mentioned above, OpenGL has been around for a while and has accumulated many ways of performing the same tasks, such as drawing. Under normal OpenGL, there are at least three ways of drawing 3-D geometry to the screen:

  • Immediate mode
  • Vertex arrays
  • Vertex buffer objects

Of these three, immediate mode support has been dropped in OpenGL ES (see code sample above). That leaves vertex arrays and vertex buffer objects.

Vertex arrays consist of what you'd expect: arrays of vertices, indices, normals, and colors that are written to the graphics chip every frame. They are reasonably straightforward to work with, and were the first thing I tried when implementing Molecules. To draw an icosahedron based on the above code, you'd use something like the following:

	glVertexPointer(3, GL_FLOAT, 0, vdata);
	glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_SHORT, tindices);

You first tell OpenGL to use the array of vertices (vdata) that was set up before, then draw triangles based on the array of indices (tindices).

I was using this to draw icosahedrons, which I was then translating to different locations using the glTranslatef() function. The per-frame rendering performance was not very good, which I figured was due to the multiple glDrawElements() and glTranslatef() calls, so I calculated all the geometry ahead of time and just did one large glDrawElements() call. I still wasn't happy with the performance, so I took a look at my program in Instruments and found that most of its time was being spent copying data to the GPU.

This is where vertex buffer objects come into play. Vertex buffer objects (VBOs) are a relatively new addition to the OpenGL specification that provide a means of sending geometry data to the GPU once and having it cached in video RAM. When geometry is re-sent every frame, the bus between the CPU and GPU can become a severe bottleneck, and VBOs reduce or eliminate that bottleneck.

Setting up a VBO requires code similar to the following (extracted from the current version of Molecules):

m_vertexBufferHandle = (GLuint *) malloc(sizeof(GLuint) * m_numberOfVertexBuffers);
m_indexBufferHandle = (GLuint *) malloc(sizeof(GLuint) * m_numberOfVertexBuffers);
m_numberOfIndicesForBuffers = (unsigned int *) malloc(sizeof(unsigned int) * m_numberOfVertexBuffers);
unsigned int bufferIndex;
for (bufferIndex = 0; bufferIndex < m_numberOfVertexBuffers; bufferIndex++)
{
	glGenBuffers(1, &m_indexBufferHandle[bufferIndex]);
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_indexBufferHandle[bufferIndex]);
	NSData *currentIndexBuffer = [m_indexArrays objectAtIndex:bufferIndex];
	GLushort *indexBuffer = (GLushort *)[currentIndexBuffer bytes];
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, [currentIndexBuffer length], indexBuffer, GL_STATIC_DRAW);
	m_numberOfIndicesForBuffers[bufferIndex] = ([currentIndexBuffer length] / sizeof(GLushort));
}
[m_indexArray release];
m_indexArray = nil;
[m_indexArrays release];
for (bufferIndex = 0; bufferIndex < m_numberOfVertexBuffers; bufferIndex++)
{
	glGenBuffers(1, &m_vertexBufferHandle[bufferIndex]);
	glBindBuffer(GL_ARRAY_BUFFER, m_vertexBufferHandle[bufferIndex]);
	NSData *currentVertexBuffer = [m_vertexArrays objectAtIndex:bufferIndex];
	GLfixed *vertexBuffer = (GLfixed *)[currentVertexBuffer bytes];
	glBufferData(GL_ARRAY_BUFFER, [currentVertexBuffer length], vertexBuffer, GL_STATIC_DRAW);
	glBindBuffer(GL_ARRAY_BUFFER, 0);
}
[m_vertexArray release];
m_vertexArray = nil;
[m_vertexArrays release];

This looks quite complex, but let me explain what's going on here. Remember when I mentioned above that unsigned short indices can address only 65,536 vertices, which creates the need for multiple index and vertex arrays? You can see that here. I loop over each vertex and index array and set them up as buffer objects. glGenBuffers() starts things off by creating a new buffer and passing back a handle to that buffer. glBindBuffer() lets you set what type of buffer to use: GL_ELEMENT_ARRAY_BUFFER for indices, or GL_ARRAY_BUFFER for vertices, normals, and colors. I created my arrays using NSData objects, so I grab their bytes for use in glBufferData(), which sets up the transfer of those bytes to the GPU. Note the use of GL_STATIC_DRAW. This hints to the GPU that this vertex buffer object will not change, and allows OpenGL to optimize for that condition. Finally, the vertex buffer object is unbound.

Note that once the geometry has been sent to video RAM, there's no need to keep it around locally, so all NSData objects containing the arrays are released.

Now, you just need to refer to these VBOs every frame when you do your drawing and the geometry data will all be handled locally by the GPU:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
unsigned int bufferIndex;
for (bufferIndex = 0; bufferIndex < m_numberOfVertexBuffers; bufferIndex++)
{
	glBindBuffer(GL_ARRAY_BUFFER, m_vertexBufferHandle[bufferIndex]);
	glVertexPointer(3, GL_FIXED, 0, NULL);
	glBindBuffer(GL_ARRAY_BUFFER, m_normalBufferHandle[bufferIndex]);
	glNormalPointer(GL_FIXED, 0, NULL);
	glBindBuffer(GL_ARRAY_BUFFER, m_colorBufferHandle[bufferIndex]);
	glColorPointer(4, GL_UNSIGNED_BYTE, 0, NULL);
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_indexBufferHandle[bufferIndex]);
	glDrawElements(GL_TRIANGLES, m_numberOfIndicesForBuffers[bufferIndex], GL_UNSIGNED_SHORT, NULL);
	glBindBuffer(GL_ARRAY_BUFFER, 0);
}
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);

Again, I loop through the sets of vertex buffer objects I've created. First a buffer is bound from its handle that we had received earlier using glBindBuffer(). Then the type of data contained within that buffer is identified using a function such as glVertexPointer(). Note the use of the NULL as the last argument, where in the vertex array case we had an actual array pointer. That signifies the use of a VBO instead of an array. Finally, we call glDrawElements(), again with a NULL last argument.

As an anecdotal data point, I achieved approximately a 4-5X speedup by switching from vertex arrays to VBOs. Unfortunately, I don't have hard numbers to back this up and that value may be convoluted with other optimizations I was doing at the same time. In any case, you will see a performance boost from going with VBOs. Now, when I run Instruments, the most-called function is glDrawTriangles, indicating that most of the time is being spent by the GPU actually drawing to the screen and not waiting on data to be transmitted.

The downside to the VBO approach is that you will need to do all the geometry calculations yourself to lay out all the vertices, normals, and indices. Also, this example worked really well with the VBO concept, because a molecule is rendered once and then just manipulated. For objects that change their shapes, new geometry will need to be sent every frame. VBOs can still give an advantage here, as only the parts of the VBO that change need to be transmitted. Also, all geometry data will be transferred using DMA, so you will see performance improvements due to that as well.
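As a sketch of that partial-update arithmetic (the helper names here are mine; glBufferSubData() is the standard OpenGL call for replacing a subrange of an existing buffer): if only vertices [first, first + count) change, only their bytes need to be re-sent.

```c
#include <stddef.h>

// Bytes occupied by one vertex position: three 4-byte GLfloat coordinates
#define BYTES_PER_VERTEX (3 * sizeof(float))

// Byte offset of vertex number 'first' within a position VBO,
// e.g. for glBufferSubData(GL_ARRAY_BUFFER, offset, size, data)
size_t vertexRangeOffset(size_t first) { return first * BYTES_PER_VERTEX; }

// Size in bytes of a run of 'count' vertices
size_t vertexRangeSize(size_t count) { return count * BYTES_PER_VERTEX; }
```

Updating 50 vertices starting at vertex 100 would then touch only 600 bytes at offset 1,200, rather than re-uploading the whole buffer.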

Rotating the 3-D model from the user's point of view

One last trick I wanted to pass along was how I accomplished the rotation of the 3-D molecules using the multitouch interface. Molecules uses the multitouch interface of the iPhone to perform rotation by moving one finger left-right or up-down on the screen, to zoom in and out on the molecule using a pinch gesture, and to pan around the molecule using two fingers moving at once.

Zooming in and out was simply a matter of calculating a zoom factor from how far apart your fingers are in the pinch gesture and calling

	glScalef(x, y, z);

where x, y, and z are the scale factors in all three axes (1.0 being no change at all, 0.5 being half the size, and 2.0 being double the size).

Likewise, translation was done using

	glTranslatef(x, y, z);

where x, y, and z are the amounts by which to offset the object in all three axes.

Rotation was the tricky one. Rotation of an object is done by using

	glRotatef(angle, x, y, z);

where angle is how much rotation to perform. x, y, and z define the axis about which the model is to rotate. For example (1, 0, 0) means for the object to rotate about the X-axis (imagine a rotisserie chicken). Where this gets tricky is that the axis we're talking about is in the model's coordinate system. When a user swipes his finger across the iPhone screen, he is looking for rotation to occur relative to his perspective, which is different from the model's. After a bit of matrix math, it turns out you can do this conversion using the following:

	GLfloat currentModelViewMatrix[16];
	glGetFloatv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);	
	glRotatef(xRotation, currentModelViewMatrix[1], currentModelViewMatrix[5], currentModelViewMatrix[9]);
	glGetFloatv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
	glRotatef(yRotation, currentModelViewMatrix[0], currentModelViewMatrix[4], currentModelViewMatrix[8]);

This code grabs the current model view matrix, a 4x4 matrix that describes all the scaling, rotation, and translation that's been applied to the model to this point. It turns out that if you grab the elements of the matrix seen above, you can solve this complex geometry problem with just a couple lines of code. This was a real headache and hopefully will save someone else some time.
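To unpack why those particular elements work: glGetFloatv() returns the modelview matrix in column-major order, so elements [0], [4], [8] form the matrix's first row and [1], [5], [9] its second. For a pure rotation, those rows are the screen's X and Y axes expressed in the model's coordinate system, which is exactly the axis glRotatef() needs. A small helper (mine, for illustration) makes the indexing explicit:

```c
// Extract row r (0..2) of the upper-left 3x3 of a column-major 4x4
// matrix, matching how glGetFloatv(GL_MODELVIEW_MATRIX, ...) lays it out.
void modelViewRow(const float m[16], int r, float out[3]) {
    out[0] = m[r];      // element of column 0
    out[1] = m[r + 4];  // element of column 1
    out[2] = m[r + 8];  // element of column 2
}
```

With an untransformed (identity) modelview, row 1 comes back as (0, 1, 0): the screen's Y axis and the model's Y axis still coincide before any rotation has been applied.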

One thing to be careful of is the use of the glGetFloatv() function. I will be removing these calls from Molecules, because they stall the OpenGL rendering pipeline and degrade rendering performance. This could be part of the reason for the lower benchmark numbers I saw compared to GLBenchmark's above.


I'd like to place a caveat on all the above by saying that I am a relative newbie when it comes to OpenGL. I may not be explaining concepts properly or may be showing you nonoptimal means of drawing using OpenGL. I welcome any and all comments on this, and it's one of the reasons why I will open source Molecules. I want to learn from the experience of others and make a better, faster product. Also note that no mention was made of textures here, due to their absence in Molecules.

Thanks again go to Chris Foresman at Ars Technica for giving me the chance to share my insights at the iPhoneDevCamp. Hopefully, you will find some of this useful, even without the complete Molecules source code. I'd love to see more programs, especially scientific ones, take full advantage of the iPhone's graphical hardware.


Thanks so much for this post. I was really banging my head against the wall. Nothing else I've read laid out these basic concepts with workable code. Thank you so much.

Thanks. glDrawArrays crashed when I pushed too much data to it...

"...and Sony is well-known for overstating the performance of their products..."

Hmmm... interesting. I would say that it's the anti-Sony camp who frequently say this sort of thing without actually looking at the performance of the hardware. The sort of people who would probably have a heart attack when asked to write VU0 or VU1 assembler, or even just having to write code in an asynchronous environment such as the PS2 or PS3. They just have no clue.

Those PSP figures, like anything from nVidia, ATI or anybody else, rely on you rendering a flat-shaded triangle that only occupies 3 pixels or so. So assuming you only want to render very small triangles, it's a pretty accurate figure of the hardware's ability to fill VRAM with fixed-function rasterizing. Real-world figures would probably drop that to around 50%-60%, with a lot of time being spent waiting for data to load into VRAM, as well as normal CPU-side code running and having to synch.

Nice article on the difference between using the traditional vertex/index arrays and vertex buffer objects on the iPhone. I've yet to get a Mac to do development on an actual device, and my framework doesn't use VBOs yet, as I'm just working in a Win32 environment trying to make the code as portable as possible. I may add VBO support today, assuming I have the time.

I apologize if my wording came off as a little incendiary. All hardware manufacturers pull the trick of publishing the best possible theoretical numbers for their hardware. This particular statement comes from a recollection that I had of there being some argument over whether the PlayStation 2's listed specs were achievable, although I don't have a reference for that. This is why I would appreciate real-world numbers from anyone familiar with the PSP hardware.

No matter the exact numbers, the PSP clearly outclasses all listed devices in terms of 3-D capability. However, as you point out, I doubt that it is anywhere near as easy to program for as the iPhone, so achieving maximum performance might take a little know-how. Even more importantly, it is nearly impossible for independent developers to sell their applications on the PSP platform.

For the record, I do think that Sony's engineers produce hardware with outstanding performance. I purchased a PlayStation 3 and installed Gentoo Linux on it with the intention of running some massively parallel computations using the Cell processor, but haven't gotten around to it yet. In the meantime, it's a very nice Folding@Home box that also plays Blu-ray discs and games. The processing power that $400 buys you today is astounding.

You're awesome! I've been trying to figure out how to rotate an object based on the current position for a week now. Every other article I found was sending me down a road of calculus to do something that I thought should be basic. It took me two minutes to implement your solution.


I tried to run this simple code, but it crashes at the glDrawElements() line. I really don't know how to make this right. Do you have any suggestions to solve the problem?

Thanks a lot.

Code snippet

// These are from the OpenGL documentation at www.opengl.org
#define X .525731112119133606
#define Z .850650808352039932

static GLfloat vdata[12][3] = {
{-X, 0.0f, Z}, {X, 0.0f, Z}, {-X, 0.0f, -Z}, {X, 0.0f, -Z},
{0.0f, Z, X}, {0.0f, Z, -X}, {0.0f, -Z, X}, {0.0f, -Z, -X},
{Z, X, 0.0f}, {-Z, X, 0.0f}, {Z, -X, 0.0f}, {-Z, -X, 0.0f}
};

static GLushort tindices[20][3] = {
{0,4,1}, {0,9,4}, {9,5,4}, {4,5,8}, {4,8,1},
{8,10,1}, {8,3,10}, {5,3,8}, {5,2,3}, {2,7,3},
{7,10,3}, {7,6,10}, {7,11,6}, {11,0,6}, {0,1,6},
{6,1,10}, {9,0,11}, {9,11,2}, {9,2,5}, {7,2,11}
};

- (void)drawView {

[EAGLContext setCurrentContext:context];

glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glViewport(0, 0, backingWidth, backingHeight);

// Set up the stage
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
glRotatef(3.0f, 0.0f, 0.0f, 1.0f);

// The background color
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

glVertexPointer(36, GL_FLOAT, 0, vdata);
glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_SHORT, tindices);

// Render everything
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];

Looks like there is a typo in the code. Instead of

	glVertexPointer(36, GL_FLOAT, 0, vdata);

it should be

	glVertexPointer(3, GL_FLOAT, 0, vdata);

That should solve the problem.

I really appreciate Dr. Brad Larson's work on vertex buffer objects (VBOs). It is very good code and well written.

Keep up the good work, Dr. Brad Larson!

Thank you very much for publishing this article. I am working on a particle application and really looking forward to seeing how VBO will improve the rendering speed.

Hi! This post is very interesting, thank you!!

I'm working to get the software developed during my doctorate running on my iPhone. I started with Objective-C... it's OK!
But I have a problem importing the models (created in 3D Max), which are georeferenced.
I thought about using Google Earth... but I don't know. Perhaps finding a good game engine could be a solution (e.g. Unity).
I should use models in CityGML...

If you have suggestions, please write me.

For importing 3-D models, I would recommend Bill Dudney's example code for a Wavefront OBJ loader that he posted here. The OBJ file format is reasonably standard for 3-D models, so you should be able to convert your models into that format.

Thank you! I started to work with Blender + SIO2_SDK_v.1.3.5, but I'll check Dudney's example code for a Wavefront OBJ loader!

I will stay in touch with you!

I am rendering a 3-D model of the human body consisting of about 69,000 triangles. As the rendering performance was not too good with vertex arrays, I tried vertex buffer objects. The result is that it is now even slower (one frame every 15 (!) seconds).
I don't know why this behaves so badly. Is it because of how I'm indexing? Should I prefer an interleaved structure like (position, normal) for each vertex in the array, instead of having two independent VBOs for normals and vertices?

my render method looks like this:

glBindBuffer(GL_ARRAY_BUFFER, handleVerts);
glBindBuffer(GL_ARRAY_BUFFER, handleNorms);
glNormalPointer(GL_FLOAT, 0, (GLvoid*)((char*)NULL));
//Loop through faces and draw them
for(int i=0;i<numOfFaces;i++) {
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, handleFaces[i]);
	glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, (void*)0);
	glBindBuffer(GL_ARRAY_BUFFER, 0);
}


handleFaces was set up like this

handleFaces = (GLuint*) malloc(sizeof(GLuint)*numOfFaces);
for (int i=0; i< numOfFaces; i++) {
		unsigned short buffer[EDGE_SIZE];
		glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned short)*EDGE_SIZE, buffer, GL_STATIC_DRAW);
}

ps.: sorry for the 2 previous junk posts...

btw. rotations and translates are done on the modelview matrix so the 3d-model data stays untouched

glDrawElements() is expensive. If I'm reading this right, you're calling it on every vertex, which will lead to terrible performance. Only use glDrawElements() once per VBO and I think you'll see your render speeds jump up dramatically. You can also see this by running your code using the OpenGL ES template in Instruments and seeing where your hotspots are in the CPU Sampler instrument.

Brad Larson wrote:
glDrawElements() is expensive. If I'm reading this right, you're calling it on every vertex, which will lead to terrible performance. Only use glDrawElements() once per VBO and I think you'll see your render speeds jump up dramatically. You can also see this by running your code using the OpenGL ES template in Instruments and seeing where your hotspots are in the CPU Sampler instrument.

Well, I adapted the Molecules code. What I do is call glDrawElements on every triangle IBO. The thing is that there are altogether 69,000 of them. handleFaces is an array of handles, where each handle points to an unsigned short array of size 3 (one triangle's indices). Is there a way to call glDrawElements on the whole IBO array instead of running it in a loop? Sorry, I am kind of new to OpenGL (ES).

If you notice, I load approximately 65,000 indices, vertices, normals, or colors in each VBO. I then use one draw call, as I describe above in the article and in the code, per VBO. In your case, you should need only two VBOs total for your vertex data.

I caution you, you will only get ~9 FPS in the best case on anything but the iPhone 3G S with the number of vertices you're working with here.

Thanks a lot for your quick responses. Now everything has become clear. Yeah, I know that rendering won't be perfectly fluid, but with a bit of LOD processing and adapting the (desktop) models to the iPhone's resolution and screen size, the rotation should become smooth.
At least I hope so ;)

Hi Brad,

Yes, I like this post, quite interesting for beginner level GL-ES devs - Just one thing, please can you correct this sample code:

glVertexPointer(36, GL_FLOAT, 0, vdata);  // <-- 36 should be 3
glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_SHORT, tindices);

To this:
glVertexPointer(3, GL_FLOAT, 0, vdata);
glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_SHORT, tindices);

I'm at least the second person reading this post and ending up spending quite a while working this out, in my case because I reused the flawed assumption while struggling with another half dozen bugs in the early morning...

Sorry about that, I don't know how I missed it before. The incorrect code should be fixed now.

Thanks for the information. I am investigating the possibility of embedding similar material in university curricula. It has taken a lot of time for universities to respond to the iPhone developer market by including developer content in their curricula. Do you think inclusion can benefit the developer community as a whole? I would be interested to know your thoughts.

Dear Brad

I liked your molecules program. As a physician working for the pharmaceutical industry is valuable to explain the molecule structure in relation to activity.

I use Jmol for most of my presentations. But nowdays people rely on the smartphones to show quickly students and health professionals a particular molecule. I am working on developing some app for smartphones (not just iPhone). It would be nice if your product could be avaialble to other plataforms.

I develop flash applications using flex builder (adobe.com/flex) and I found a library that allows multiplattaform development (http://www.openplug.com) But since I do not know C objects, it is very difficult for me to faciliate such developement from your source code.

Any plans to include other platforms or to make it available in Flash? Or is there any documentation of your source code "for dummies" like me, so that I can try to port it to other operating systems?

Kind regards

HPatino, MD

I will not be porting this to any other platform, save the iPad. I do my development in Cocoa because it lets me do far more in a shorter period of time than any other development environment I've ever used.

That said, I do document the general design of the application, and Cocoa is pretty descriptive, so it shouldn't be too hard to figure out the general operation of the application. The OpenGL ES rendering code should be platform-independent, so much of that can be ported straight across.

Unfortunately, I don't think you'll find any decent way of making truly cross-platform applications. You need to target each platform individually or you end up with only the lowest common denominator between them. This application in particular needs high-performance 3-D graphics capabilities, something you're not going to get with cross-platform toolkits. Forget Flash for mobile application development and go with the native development environments.

My recommendation would be to focus on Android development, as that is the only major competitor to the iPhone in the mobile application development space right now. BlackBerry development is not attractive, Windows Mobile won't be relevant until the end of 2010, and Palm's WebOS hasn't made significant inroads.

Can you please post a simple tutorial with a cube or sphere, rotating it in every direction using finger touches? Please help with this; I'm not getting the rotation right.

If you want something simpler, check out this sample application I created for my advanced iPhone development course. It displays a simple textured cube and lets you rotate it with your fingers, which sounds just like what you're asking for.

To see how this works, you can download the video for my OpenGL ES class from iTunes U.

It gives errors when I compile the project, and I could not find any glRotate API call. I guess you are calculating the rotation manually and loading the modelview matrix.

What compilation errors are you seeing? The application was developed using the latest iOS 4.0 SDK, and it compiles just fine on my system using the latest SDK and Xcode 3.2.3. All of my students used it just fine last semester.

As far as not using glRotate(), yes that is an optimization that I implemented later on. See this for more. I use the internal rotation functions from Core Animation to generate the appropriate matrix, then apply that to the model view matrix. This saves a couple of matrix reads that were halting the rendering pipeline.

Thanks for all the tips. Great stuff! In your sample application, CubeExample, can you help me understand how to convert the glOrtho call to glFrustum? I just substituted one for the other; it compiled and ran, but the rotation of the cube was wacky. It was diamond shaped and just a bit weird. I figured it had to do with the CATransform3D being optimized for an orthographic projection, but I can't figure out how to make everything rotate correctly. Any advice would be very helpful. Thanks.

Hi, I'm also facing the rotation issue. I tried your method on the iPhone but it's not working; the same problem still exists. When I rotate 90 degrees to the right and then rotate upward, it rotates with respect to the Z axis instead of the X axis. Can you please help? I used the exact lines you gave:

GLfloat currentModelViewMatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
glRotatef(xRotation, currentModelViewMatrix[1], currentModelViewMatrix[5], currentModelViewMatrix[9]);
glGetFloatv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
glRotatef(yRotation, currentModelViewMatrix[0], currentModelViewMatrix[4], currentModelViewMatrix[8]);

but it's not working. Also, the example project you shared gives errors when compiled, so could you please help me? Thank you.

I'm not sure what you are asking. The rotation of the model occurs relative to the user. When they move their finger left, it rotates in that direction about an imaginary Y axis that is parallel to the screen (note: left and right directions appear to be reversed in my cube example). Likewise, when they move their finger up, the model rotates about an imaginary X axis that is parallel to the screen.

This is simpler than a trackball-style rotation, and I personally prefer it to that rotation style. I accomplish this using matrix manipulation like you describe above, which adjusts the rotation for the current model view matrix coordinate system.
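The trick behind that matrix manipulation can be sketched in isolation (hypothetical helper name, and it assumes the modelview matrix contains only rotations, no scale or shear): for a column-major rotation matrix, row i of the upper-left 3x3 block is the model-space direction that currently maps onto eye-space axis i, so rotating about that row keeps the rotation axis parallel to the screen no matter how the model has already been turned.

```c
/* Extract the model-space direction that a column-major, rotation-only
   modelview matrix maps onto eye-space axis `axis` (0 = X, 1 = Y, 2 = Z).
   For a pure rotation the inverse equals the transpose, so this is
   simply row `axis` of the upper-left 3x3 block. */
static void screenAxisInModelSpace(const float m[16], int axis, float out[3])
{
    out[0] = m[axis + 0];  /* element (axis, 0): row `axis` of column 0 */
    out[1] = m[axis + 4];  /* element (axis, 1): row `axis` of column 1 */
    out[2] = m[axis + 8];  /* element (axis, 2): row `axis` of column 2 */
}
```

With the matrix fetched via glGetFloatv(GL_MODELVIEW_MATRIX, m), axis 1 yields the same (m[1], m[5], m[9]) triplet used in the glRotatef() call quoted above, and axis 0 yields (m[0], m[4], m[8]).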

Yet another big *thank you* for posting these tutorials!

Really a nice tutorial :)

Hey Brad, great tutorial. I am trying to create the same puzzle game using the iPhone SDK. Is there a way I can port your code to the iPhone SDK? You have it for Mac OS X. Please correct me if I am wrong somewhere.

Um, Molecules is an iPhone / iPad application, not a Mac application. There isn't any Mac code here in anything that I'm talking about. All of this is for iOS.

I am sorry for confusing you, Brad. I am referring to your cube example application; I want to develop a sample application similar to a Rubik's Cube puzzle (which has a 3x3 matrix). I am new to OpenGL, so can you guide me (with logic / source code) on how to draw a 3x3 square matrix over your cube example, please?

I am trying to make a simple 3D dice-rolling application for the iPhone, which will not be released to the public because of my terrible inexperience. For some reason, I cannot seem to make my UIView subclass draw anything at all. In addition, I would like the view to have a clear background. Could you please point me to a simple guide that has ONLY the code needed? I am very good with C and Objective-C, so I can easily pick OpenGL apart and see what everything does. My problem is that I cannot find the full code, and I can only find drawing info for OS X.

If you want just the basics for setting up an OpenGL ES scene, it's hard to beat starting an application using Apple's OpenGL ES Application template in Xcode. This will give you boilerplate code and a working UIView that hosts OpenGL ES content. You can expand upon it from there.

For a slightly more detailed example of a rotating cube, you can look at the source code for a sample I constructed for the OpenGL ES 2.0 session of my iTunes U course.
