We are looking for someone experienced in building neural networks; experience with image and image-sequence (video) models would be ideal!
Simulating CGI fire and smoke for visual effects in film is a time consuming process. We're looking for someone to help build a solution that accelerates this process by creating a 2D detail-enhancing/image-upscaling tool that specializes in smoke and fire.
The program should be able to take in a sequence of images (PNG, TIFF or EXR).
Any fire and smoke visible in the image sequence will then be enhanced with extra detail.
The details in the smoke and fire should be temporally stable (the new details must not look jittery or artificial; they must follow the motion of the smoke in a natural way).
The program must preserve alpha transparency (RGB+A).
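To make the expected I/O contract concrete, here is a minimal sketch of the per-frame pipeline, assuming RGBA frames are already loaded as NumPy arrays. The `enhance_frame` body is a placeholder for the actual detail-enhancing model (which is the deliverable); the point is only that alpha passes through untouched and frames are processed as an ordered sequence:

```python
import numpy as np

def enhance_frame(rgba: np.ndarray) -> np.ndarray:
    """Placeholder for the detail-enhancing model.

    Takes an H x W x 4 float array (RGB + alpha). Only the RGB
    channels would be fed to the network; alpha passes through
    unchanged so transparency is preserved.
    """
    rgb, alpha = rgba[..., :3], rgba[..., 3:]
    enhanced_rgb = rgb  # stand-in: a real model would add detail here
    return np.concatenate([enhanced_rgb, alpha], axis=-1)

def enhance_sequence(frames):
    """Process frames in order.

    A temporally stable model would also condition on neighbouring
    frames (e.g. via optical flow or a recurrent architecture)
    rather than treating each frame independently.
    """
    return [enhance_frame(f) for f in frames]
```

File loading and saving (PNG/TIFF/EXR) would sit around this, using a library that keeps the alpha channel and, for EXR, the float dynamic range.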
In the film industry, often it is impractical to use real-world explosions on set, whether it is for safety or cost reasons. Instead, artists will simulate explosions using 3D programs and the final result will be a sequence of images of the effect, whether that is an explosion, smoke plume, shockwave, sandstorm or fire.
These are then composited over the original footage in editing software.
However, these digital simulations are often not of the highest quality, usually due to hardware or time constraints. This is where we hope a neural-network-based solution will help, by taking the 2D images and synthesizing new detail into the effect.
Please reply with "Know" in your subject to show you have read this post thoroughly.
Smoke and fire follow specific behaviors which we want this program to account for. For example, if the image sequence is of a large plume of smoke, we hope the program can add the micro-turbulence and vortices you would expect to see in the real world.
We can help to collect any data sets necessary to achieve this. Per-pixel detail is the goal!
Please let us know any questions you have.