Thursday, September 17, 2009

Overlaying views on UIImagePickerController's view

Recently, iPhone SDK 3.1 was released. With this update, Apple now allows developers to overlay their own views on the preview view of the UIImagePickerController. Before SDK 3.1, you either needed PLCameraController, a private API, or had to do some hack in the view hierarchy of the UIImagePickerController.

In iPhone SDK 3.1, UIImagePickerController has a new property named cameraOverlayView, defined as:

The custom view to display on top of the default image picker interface.
@property(nonatomic,retain) UIView *cameraOverlayView

When I first encountered this property, I misunderstood what it means. I thought the property was the camera preview view itself, but I was wrong. cameraOverlayView doesn't give us the camera preview view. All we can do with cameraOverlayView is add our own view on top of the UIImagePickerController's view, and nothing more.

This is very limited functionality for augmented reality, because it is still not possible to get the raw video data from the camera. However, if you just need the live video as a background, this is quite an easy solution.

The cameraOverlayView can be used as follows (I implemented this in an IBAction of a view controller):

// Let's add an image view on top of the video
UIImage *image = [UIImage imageNamed:@"mylogo.png"];
UIImageView *imgView = [[UIImageView alloc] initWithImage:image];
imgView.frame = CGRectMake(100, 100, 128, 128);
UIImagePickerController *camera = [[UIImagePickerController alloc] init];

// Hide Apple's UI
camera.sourceType = UIImagePickerControllerSourceTypeCamera;
camera.showsCameraControls = NO;
camera.navigationBarHidden = YES;
camera.toolbarHidden = YES;
camera.wantsFullScreenLayout = YES;

// Add the view to be overlaid
camera.cameraOverlayView = imgView;
[imgView release];

// Show the camera's view as a modal view controller.
[self presentModalViewController:camera animated:YES];
[camera release];

It is a little bit weird that a modal presentation is required, but that is the only way Apple allows UIImagePickerController to be used. If everything goes well, you will see something like this: a logo overlaid on the camera's preview view.

If you want the video to fill the entire screen, scale the camera view through the 'cameraViewTransform' property, which is also a new feature in 3.1:

camera.cameraViewTransform = CGAffineTransformScale(camera.cameraViewTransform, 1.0, 1.13);

So it is now very easy to build simple augmented reality applications using the 'cameraOverlayView' feature. I made a simple application that displays annotations in the scene using the iPhone 3GS's compass.


  1. How would you add a button to your overlaid image to take a picture with the takePicture method?

  2. Hi Wonwoon,
    We had a brief discussion thru youtube and you referred me here.

    So If I understand this right UIImagePickerController gives you the same functionality as the PLCameraController??

    I have a problem that I can't solve right now and I hope that you can help me on the way, I'm fairly new at iPhone development and Objective-C ( so lucky me :)).

    Here's what I'm struggling with.
    I need to (simplified version): Create a camera view, take a picture(s), manipulate the image data.

    At first I implemented the UIImagePickerController and got it to work, taking and storing pictures. But when I looked at the pictures I saw that they were not the "real" pictures, only screenshots with a size of 320x480. I started googling for suggestions on how to get the real image data (the full-size picture) and stumbled upon a thread saying that I should use PLCameraController instead. I've tried to implement PLCameraController, and I get it to launch the camera, but now I'm stuck.

    You seem to have good knowledge about this area, therefore I would like to ask you a couple of questions.

    1. Should I use UIImagePickerController instead of PLCameraController developing on 3.1?
    2. Is it possible to get the full size picture using UIImagePickerController?
    3. If the only way forward is to use the PLCameraController, would you be willing to provide me with some simple sample code?

    Please help me out!

  3. Hi, drisse.

    If you want to get the image with full resolution (something like 1536x2048 on iPhone 3GS), you just need UIImagePickerController. It provides the full resolution of image as UIImage type.

    There are many examples of how to capture an image through UIImagePickerController.
    Please refer to this tutorial:

    PLCameraController is a private framework, and it is not a good idea to use it. In addition it is not really required for capturing a photo. UIImagePickerController is enough if you are working with photos, not with video streams.

  4. hello,

    I have the same problem as drisse, but I also need a reference on how to create an iPhone application with additional augmented reality objects, just like the youtube video in your previous post.

    I hope you can create an entry on how to code the OpenGL ES and MySQL parts, and explain how the code can detect an object and display the AR object at the same time.

    Thank you so much Wonwoo & have a nice day...

  5. Well, doing AR on iPhone is not that difficult.
    You can just add an OpenGL ES view on top of the video preview by using the cameraOverlayView feature of UIImagePickerController.

  6. can you please describe how to make augmented reality app like in video? I understood the part with overlaying view and placing just static image, but how do you make it so images position themselves correctly on screen when you turn your iphone?

  7. Anonymous/ That is a very simple trick. I didn't change the position of the objects (like views and images) in this example. Instead, I made a UIView that contains all images and changed its position depending on the value coming from compass.

  8. Hello Wonwoo,

    That is a cool example! So the location of the graphics in your example is based on the compass, but is there a way to actually capture what the camera is "seeing" through the cameraOverlayView and modify its contents based on that? Do you know maybe where I can find something on that?


  9. Hello Kelly.

    If you want to use the images coming from the camera, you can use iOS 4's camera API for image capture and display. Then you compose your own view hierarchy to display contents on top of the video, while doing some image processing.

    WWDC2010 video tutorial provides very good examples to do that.
    Hope this helps.
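For readers following this thread, here is a rough sketch of the iOS 4 capture pipeline mentioned in the reply above, using AVFoundation. This is not the author's code; error handling is omitted, and the delegate and queue names are placeholders.

```objc
#import <AVFoundation/AVFoundation.h>

// Minimal AVFoundation capture sketch (iOS 4+), run inside a view controller.
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Wire the default camera into the session.
AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:NULL];
[session addInput:input];

// Raw frames arrive at captureOutput:didOutputSampleBuffer:fromConnection:
// on the queue below; "self" must adopt AVCaptureVideoDataOutputSampleBufferDelegate.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("videoQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
[session addOutput:output];

// Show the live video with a preview layer, then put your own views on top.
AVCaptureVideoPreviewLayer *preview =
    [AVCaptureVideoPreviewLayer layerWithSession:session];
preview.frame = self.view.bounds;
[self.view.layer insertSublayer:preview atIndex:0];

[session startRunning];
```

Unlike cameraOverlayView, this route hands you the actual pixel buffers, which is what you need for image processing.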

  10. Hello Wonwoo,

    I am really impressed with the example you have shown on youtube. I am also trying to make something similar but I am not finding it so easy!
    At the moment I am trying to read in latitude & longitude values from a database and add a UIView according to each (lat,lon) and display it on the screen. Would this be a correct way to approach it?
    Also, what would be the best way to connect to a remote database?

    Many thanks,

  11. Hello Gillian.
    I'm not actually familiar with managing database and remote connection. Sorry for that.

  12. Hello Wonwoo,
    first of all, congratulations on your blog; your work is really interesting to me.
    I'm doing my final degree project with AR on the iPhone. My problem is that I can't keep my images at a fixed place in the view. For example, I put a label in the view when the compass indicates north, but it doesn't maintain its position when I turn the iPhone; it disappears completely and only appears again when I point the iPhone north.
    I read your answer to an anonymous person who asked a similar question, but I can't understand your answer: "I made a UIView that contains all images and changed its position depending on the value coming from compass."
    Thank you!

  13. I have another question: do you use the accelerometer for this application, or just the compass?
    thanks again

  14. Hello Alberto.

    You can change the position of a view by setting its center.
    For example, something like this.

    UIImageView *imageView = [...] ;
    imageView.center = CGPointMake(xx, yy) ;

    So what I did is just changing the view's center based on the values coming from the compass. It is an easy task.

    Considering the demo video above, I didn't use the accelerometer.

    One thing you have to be careful about: when you set a new center for a view, the result will not be updated on the screen until the function returns. Using a block operation is a good way to do this kind of job.
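The reply above can be sketched as follows. Assumptions: the view controller is the CLLocationManager's delegate, `containerView` is a hypothetical property holding all the annotation images, and the heading-to-offset formula is invented purely for illustration.

```objc
#import <CoreLocation/CoreLocation.h>

// CLLocationManagerDelegate callback, fired as the compass heading changes.
- (void)locationManager:(CLLocationManager *)manager
       didUpdateHeading:(CLHeading *)newHeading {
    CGFloat heading = newHeading.magneticHeading;   // 0..360 degrees

    // Illustrative mapping: shift the container left/right with the heading.
    CGFloat x = 160.0f - heading * 2.0f;

    // Hop to the main queue so the screen updates without waiting for
    // the current run-loop pass to finish.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.containerView.center =
            CGPointMake(x, self.containerView.center.y);
    });
}
```

Because the container moves as one unit, the individual annotation views keep their relative layout for free.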

  15. So how do you map the compass values into the screen pixels?
    You have the magneticHeading value taken from the CLLocationDirection class that is in degrees, and then the 320 pixels of the screen (in portrait mode).
    I can't imagine how do you transform this values.
    Sorry for repeat the question, I cannot continue without resolving this problem and it helps a lot because what you do in the video demo is exactly what I want to do in my application.

  16. @Alberto / It was just simple mapping between the compass values and view coordinates. I don't remember the exact code, but you can do something like this.

    0 : 360 = 160 : (view.frame.size.width - 160)

    Easier way is to render the contents in an OpenGL ES view, where you can use compass values directly for rotating the contents around the user.

  17. Hello Wonwoo, how are you? I liked your material a lot; you have done well. I'm trying to make an application similar to the video, and I saw that several people here are too, but I'm having great difficulty. I managed to display image files, but it doesn't look right. Could you perhaps release the source code of a simple application, like the one in the video? Thank you very much.


  18. If there are no camera controls (showsCameraControls = NO), how can I capture the picture?
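A hedged sketch of one possible answer: even with showsCameraControls set to NO, UIImagePickerController still responds to the takePicture method introduced in iOS 3.1, so you can wire a button in your overlay view to it. The property name `camera`, the selector, and the geometry below are all illustrative.

```objc
// Sketch: the view controller keeps the picker in self.camera and the
// overlay in overlayView (both names are made up for this example).
- (void)addShutterButtonTo:(UIView *)overlayView {
    UIButton *shutter = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    shutter.frame = CGRectMake(120, 400, 80, 44);
    [shutter setTitle:@"Snap" forState:UIControlStateNormal];
    [shutter addTarget:self
                action:@selector(snap)
      forControlEvents:UIControlEventTouchUpInside];
    [overlayView addSubview:shutter];
}

- (void)snap {
    // Ask the picker to capture; the photo is delivered through the usual
    // imagePickerController:didFinishPickingMediaWithInfo: delegate callback.
    [self.camera takePicture];
}
```

The key point is that hiding Apple's controls does not disable capture; it only removes the built-in shutter UI.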