Simulating the ecology of lakes, visualizing the periodic table, and reconstructing a virtual basilica in Rome.
Analyzing body language in film and TV, and using VR to simulate how cave paintings were perceived by their creators.
Labor and expense have dropped dramatically. Creating VR in 1962 (the Sensorama) required an ungainly recording rig; now the technology is in every smartphone.
Algorithmic art was transparent, whereas today's large language models are inscrutable.
The CIVIT AR volumetric capture studio costs $500,000, plus another $10,000-$20,000 per minute of analyzed video.
Soon players will generate role-playing game avatars from their own bodies with just a smartphone.
Traditional rigging, keyframing, and tweening (e.g., with Maya) are laborious.
Generative AI is faster and can accommodate transition states/edge cases.
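At its core, the tweening mentioned above is interpolation between artist-set keyframes. A minimal sketch in Python, using a hypothetical list of (time, value) keyframes for a single animated parameter:

```python
def tween(keyframes, t):
    """Linearly interpolate a value at time t from sorted (time, value) keyframes.

    Before the first keyframe or after the last, the value is held constant.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the pair of keyframes bracketing t and blend between them
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)

# Hypothetical example: a joint angle animated from 0 to 90 degrees over one second
keys = [(0.0, 0.0), (1.0, 90.0)]
print(tween(keys, 0.5))  # prints 45.0, halfway between the keyframes
```

Production tools add easing curves and handle many channels at once, but every in-between pose still has to be authored or interpolated this way; generative models instead learn plausible motion directly from data.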
Laser scanning to create a point cloud is accurate but labor-intensive.
Neural radiance fields (NeRFs) are "fuzzier" but easier to make.
Just as YouTubers can attract millions of followers without a Hollywood budget, animators and game developers can create works without the hundreds of collaborators seen in end credits for a Pixar movie.
Meta and TikTok haven't yet figured out how to make money from repurposing 3D user data to model virtual worlds. Re-creating interactive avatars of deceased relatives is a likely near-term application.
Yes, they have an incentive to develop content to sell headsets, even if that means a 3D recording of your 3-year-old walking across the kitchen. Meta VR headsets use "inside-out tracking" that captures images of your surroundings, and Meta is legally allowed to use those images internally. This could let Meta build a map of your living room.
In some ways, it already has encroached on the idea space by crafting the messages you get in social media and elsewhere.
This presentation was recorded by the University of Maine's New Media program. For more information, contact ude.eniam@otiloppij.
Timecodes are given in minutes:seconds.
Artificial intelligence has already revolutionized how we generate text and images, but 3D applications, from animation to games to augmented and virtual reality, are major industries ripe for disruption by new generative AI tools. This presentation includes a talk, demo, and question-and-answer session on the technical possibilities and ethical risks of this emerging field.
This presentation features software developer-artist John Bell, whose research focuses on collaborative creativity with a special focus on virtual environments. At Dartmouth Bell is Associate Director of the Data Experiences and Visualizations Studio, Associate Director of the Media Ecology Project, Manager of Dartmouth Research Computing's Digital Humanities Program, and Lecturer in Film and Media Studies. Bell is also a New Media alumnus who teaches in UMaine's Digital Curation graduate program. The conversation is moderated by Jon Ippolito, Professor of New Media, director of the Digital Curation program, and co-founder of UMaine's larger Learning With AI initiative.
This conversation took place on 15 November 2023 and is sponsored by the University of Maine's New Media program, which teaches animation, digital storytelling, gaming, music, physical computing, video, and web and app development. Learn more about New Media's hands-on webinars.
Watch the entire video or choose an excerpt from the menu on this page.