Creating 2D or 3D views from a finite set of input images has been a long-standing goal in Computer Vision; this project revisits this fundamental problem from a new perspective. The range of applications is vast (e.g. computer games, 3D Photosynth browsing, 3D image manipulation), especially in light of recent hardware developments such as 3D television and stereo point-and-shoot cameras. One of the major challenges of new view synthesis is to reconstruct both the depth and the colour of pixels that were not visible in any of the input images; we term this problem 3D scene completion. Our hypothesis is that by extracting and modelling physical aspects of the scene, such as geometry, light and camera, it is possible to achieve better results than competing methods that operate exclusively at the image level.