Unrest VR is an interactive non-fiction experience inspired by Jennifer Brea’s feature documentary Unrest (Sundance 2017 Special Jury Award). An immersive journey into Jen’s experience of an invisible illness, myalgic encephalomyelitis, the project contrasts the painful solitary confinement of a bedroom world with the kinetic freedom of an inner dreamscape. When you’re too sick to leave your bed, where do you go?
Creating a Virtual Syria
Nonny de la Peña’s Immersive Journalism Hits Big in Davos
For more information on de la Peña’s work, please visit: http://www.nonnydlp.com/
Big players in VR film:
“… In the 1980’s Levoy worked on volume rendering, a technique for displaying three-dimensional functions such as computed tomography (CT) or magnetic resonance (MR) data. In the 1990’s he worked on 3D laser scanning, culminating in the Digital Michelangelo Project, in which he and his students spent a year in Italy digitizing the statues of Michelangelo. In the 2000’s he worked on computational photography and microscopy, including light field imaging as commercialized by Lytro and other companies. At Stanford he taught computer graphics and the science of art, and digital photography. Outside of academia, Levoy co-designed the Google book scanner, launched Google’s Street View project, and currently leads a team in Google Research that has worked on Project Glass, the Nexus HDR+ mode, and the Jump light field camera for Google Cardboard…” http://graphics.stanford.edu/~levoy/
Stanford – The Digital Michelangelo Project
“… As an application of this technology, a team of 30 faculty, staff, and students from Stanford University and the University of Washington spent the 1998-99 academic year in Italy scanning the sculptures and architecture of Michelangelo. As a side project, we also scanned 1,163 fragments of the Forma Urbis Romae, a giant marble map of ancient Rome. We are currently back in the United States processing the data we acquired. Our goal is to produce a set of 3D computer models – one for each statue, architectural setting, and map fragment we scanned – and to make these models available to scholars worldwide…”
“The motivations behind this project are to advance the technology of 3D scanning, to place this technology in the service of the humanities, and to create a long-term digital archive of some important cultural artifacts.”
David Rittenhouse’s 1771 Orrery
“David Rittenhouse’s 1771 orrery is a physical model of the solar system, known for its sophisticated mechanical design for its time. To demonstrate its functionality freely, without risk of damaging this antique, we proposed to design and develop an interactive virtual reconstruction of the Rittenhouse Orrery.”
“Our goal is to re-introduce the Rittenhouse Orrery to a new audience, to preserve the original, and to convey its operation to the public. We hope to create a historic preservation project through computer graphics, and also to build a bond among the women in the WiCS residential program while we create this community-enhancing project.”
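The Penn project’s software is not reproduced here, but the core of any virtual orrery is the same computation its gears perform: each planet advances at a fixed angular rate set by its orbital period. A minimal sketch (the planet list and period values are illustrative, not taken from the project):

```python
import math

# Sidereal orbital periods in Earth days (standard modern figures,
# rounded) for the planets a 1771 orrery would typically show.
PERIODS = {
    "Mercury": 87.97,
    "Venus": 224.70,
    "Earth": 365.26,
    "Mars": 686.98,
    "Jupiter": 4332.59,
    "Saturn": 10759.22,
}

def planet_angles(days_elapsed):
    """Return each planet's heliocentric angle in degrees, assuming
    uniform circular motion -- the same simplification a geared
    orrery makes."""
    return {
        name: (360.0 * days_elapsed / period) % 360.0
        for name, period in PERIODS.items()
    }

# After one Earth year, Earth is back near 0 deg while Mercury has
# lapped the Sun roughly four times.
for name, angle in planet_angles(365.26).items():
    print(f"{name:8s} {angle:7.2f}")
```

An interactive reconstruction would simply animate `days_elapsed` and map each angle to a position on the planet’s ring.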
The reredorter or necessarium (the latter being the original term) was a communal latrine found in mediaeval monasteries in Western Europe and later also in some New World monasteries.
Needs a bit of TLC .. but almost there 😉
73 photos taken on a Sony RX100 M3 (4K RAW). Processed using PhotoScan and Meshmixer.
Publication: The Visual Turing Test for Scene Reconstruction
Qi Shan, Riley Adams, Brian Curless, Yasutaka Furukawa, and Steven M. Seitz
Sifted – A film by Dan Monaghan & Ben Torkington
“Sifted is a 7-minute animation set in a point-cloud world. The film’s sole character, Mia Straka, walks through a vast digital landscape made up of rustic New Zealand places blended with the past’s landmark monuments. The models in the film were made via photogrammetry, the use of photography in surveying and mapping to measure the distance between objects. This allowed us to take photos of a building, then reconstruct the points of detail that make up that object. In the most common use outside of our project, the 3D model would be meshed and seen as a solid object. Dan Monaghan’s vision was different: he wanted the raw points to be seen instead, providing the ethereal mood that makes the film quite different from most CGI.”
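The “raw points instead of a mesh” choice corresponds to a point cloud with vertices but no faces. Photogrammetry tools commonly export this as an ASCII PLY file; a minimal sketch of what such a file looks like (the point values are hypothetical):

```python
def ply_from_points(points):
    """Return an ASCII PLY document describing a raw point cloud:
    vertices only, no face elements, so viewers draw loose points
    rather than a solid meshed surface."""
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ]
    body = [f"{x} {y} {z}" for x, y, z in points]
    return "\n".join(header + body) + "\n"

# Two (hypothetical) reconstructed points of a scanned facade:
print(ply_from_points([(0.0, 0.0, 0.0), (1.5, 2.0, 0.3)]))
```

Meshing would add `element face` entries to the header and index triples to the body; leaving them out is exactly what keeps the cloud ethereal.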
“…it [VR] really is not a filmic medium. Although film is one of the contributors, it is an interactive medium and if there isn’t interaction, you’re not using the digital.”
— Janet Murray, Georgia Tech Associate Dean and Professor
at the Virtually There Conference
Publication: Ray-Casted BlockMaps for Large Urban Models Streaming and Visualization
By Paolo Cignoni, Marco Di Benedetto, Fabio Ganovelli, Enrico Gobbetti, Fabio Marton, Roberto Scopigno
The Swedish Pompeii Project
The Swedish Pompeii Project started in 2000 as a fieldwork project initiated at the Swedish Institute in Rome. The aim was to record and analyse an entire Pompeian city-block, Insula V 1….
Simultaneously, a new branch of advanced digital archaeology, involving 3D reconstructions and documentation methods, was added to the project agenda. The insula was scanned during the field campaigns in 2011 and 2012 in collaboration with the ISTI (Istituto di Scienza e Tecnologie dell’Informazione “A. Faedo”) in Pisa and the Humanities Lab at Lund University. The current work, carried out by the ISTI in Pisa, consists of finding a way to navigate the models easily and “naturally”, and in such a way that it will be possible to freeze the image of a wall or other detail under study and link this image back to the documentation offered by this web page. The results will be presented shortly, and our first 3D models are already available in open access.
Joint Interactive Visualization of 3D Models and Pictures in Walkable Scenes
This short paper briefly describes the joint 3D-2D visualization technique employed by PhotoCloud to seamlessly enrich the 3D scene visualization with data consisting of 2D calibrated pictures.
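PhotoCloud’s own code is not reproduced here, but “calibrated pictures” means each photo carries known intrinsics and pose, so any 3D scene point can be mapped to a pixel in the photo. A hypothetical minimal sketch of that pinhole projection (all parameter names are illustrative):

```python
def project_point(point, fx, fy, cx, cy, R, t):
    """Project a 3D world point into a calibrated camera.
    R is a 3x3 rotation (list of rows) and t a translation, giving
    the camera-space point Xc = R @ X + t; the pinhole model with
    focal lengths (fx, fy) and principal point (cx, cy) then maps
    Xc to pixel coordinates."""
    Xc = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    if Xc[2] <= 0:
        return None  # behind the camera: this picture cannot see the point
    u = fx * Xc[0] / Xc[2] + cx
    v = fy * Xc[1] / Xc[2] + cy
    return (u, v)

# Identity pose: a point 2 units in front of the camera lands at
# the principal point.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(project_point((0, 0, 2), 100, 100, 50, 50, identity, (0, 0, 0)))
```

Running this mapping in both directions is what lets a viewer blend a 2D photograph seamlessly into the 3D scene it depicts.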
Visionary Cross project
These are static images of the full-resolution scans, taken using MeshLab, the CNR’s open-source 3D software. The meshes are almost complete: there are a couple of rough places where the different scans don’t quite line up yet (though none in these screenshots), and Matteo and Marco have yet to add any colour information to them.
Publication: Documenting and Monitoring Small Fractures on Michelangelo’s David
FlexMolds: Automatic Design of Flexible Shells for Molding – SIGGRAPH Asia 2016
In the Eyes of the Animal by Marshmallow Laser Feast
In the Eyes of the Animal, a journey through the food chain, is an artistic interpretation of the sensory perspectives of three British species. Created using LiDAR scans, unmanned aerial vehicles (drones), and bespoke 360° cameras, the piece is set to a binaural soundscape using audio recordings sourced from Grizedale Forest in the north of England.
Assent by VRTOV
In 1973 my father witnessed the execution of a group of prisoners captured by the military regime in Chile, the same Army that he was part of. Assent puts the user in my father’s boots as we walk to the place where that happened. Assent is an autobiographical immersive documentary developed in the Unity engine for the Oculus Rift Virtual Reality headset.
RecoVR: Mosul takes you on a tour of the virtual Mosul Museum in Iraq to showcase the work of Rekrei (formerly known as Project Mosul), a heritage project dedicated to restoring lost cultural heritage through photogrammetry and 3D modeling. The tour highlights antiquities in and around the Mosul Museum that were destroyed by Islamic State in 2015. It tells the story of their destruction and explains how The Economist and Project Mosul reconstructed the antiquities and built the virtual museum.
While a Fellow at the MIT Open Documentary Lab, Ben Khelifa transformed his latest project, The Enemy, from a photo exhibition into a virtual reality installation. This immersive installation uses virtual reality to bring the audience into contact with soldiers from opposite sides of longstanding global conflicts. He has further developed the project as a CAST Visiting Artist, in collaboration with Fox Harrell of the Imagination, Computation and Expression (ICE) Laboratory. Together, they are incorporating concepts from cognitive science and artificial intelligence-based interaction models into the project, with the goal of testing whether a VR installation can engender empathy and humanistic reflection for each side of the story through listening to the soldiers’ testimonies.