Python Worldpositioncombiner – Proof of concept

So I thought about writing a script that combines all the world position (WP) data of a sequence and gets rid of the unused pixels and double entries. The idea was to create a new file so the whole sequence doesn't need to be loaded and the point cloud recalculated every time you switch frames. The script works, but it doesn't scale well: Nuke's Python API has no real option to write individual pixel data, and the sample() function used to check the input images is quite slow. The bigger the files and sequences get, the worse stability and performance get. So it's pretty much unusable in production; I see it more as a proof of concept now.

How it works:
1. The script analyses all pixel data with content (where alpha > 0).
2. It stores each new world position together with its corresponding beauty RGBA in a list.
3. If there is already an entry for a world position pixel, the script just skips it instead of writing a double entry (a compact sketch of this dedup step follows after the list).
4. This is done for every pixel of the whole sequence, and a new file gets created.
5. All the unique world positions end up in the condensed file.
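As a compact sketch of steps 1–3 (not the actual script – the helper name and the rounding tolerance here are my own), a dict keyed on the world-position triple covers the bookkeeping:

    unique = {}  # (wp_r, wp_g, wp_b) -> beauty RGBA; membership checks are O(1)

    def add_pixel(wp_rgb, beauty_rgba, alpha):
        if alpha == 0:
            return                                # step 1: skip pixels without content
        key = tuple(round(c, 6) for c in wp_rgb)  # rounding guards against float noise
        if key not in unique:                     # step 3: no double entries
            unique[key] = beauty_rgba             # step 2: pair WP with its beauty RGBA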

What's happening:
In this 10-frame sequence of 5×5 px we can see a camera moving left and right past the 3 objects in the scene. Over the whole sequence those 3 objects are visible in 23 pixels. If we use the image sequence as input for the PositionToPoints node and play it through, it has to calculate 250 pixels in total even though only 23 of them carry information. The script combines those 23 data blocks into one image, which cuts the node's recalculations down from 10 to 1. And since all the position pixels are in one file, the point cloud of the whole scene is visible instead of just the camera view of the current frame.
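To make the numbers concrete, this is just the arithmetic from the example above:

    frames, width, height = 10, 5, 5
    total = frames * width * height  # pixels the node processes over the sequence
    unique = 23                      # pixels that actually carry data
    print(total, unique, round(total / unique, 1))  # 250 23 10.9
    print("recalculations:", frames, "->", 1)       # once instead of per frame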

Viewport:
In the top left you can see a contact sheet of the beauty and WP passes of the sequence.
In the bottom left is the generated image strip with only the 23 pixels of beauty and WP.
In the middle/top image you see the PositionToPoints node working with the sequence input.
In the middle/bottom you see the PositionToPoints node working with the condensed input.

Issues:
To get pixel data out of a sequence you pretty much have to call sample() in a for loop that checks every pixel's content. That is stable, but it gets really slow as the inputs grow – close to unusable. The main issue is the lack of a 'write individual pixel data' function in Nuke's Python API: there is no built-in way to set values the way sample() gets them. That is pretty much why the whole thing isn't scalable – you have to take absurd workarounds to make it happen, and it's simply not worth it. Even after asking the official Foundry team on Twitter and checking with people in the Nuke Python forum, it seems there is no real solution in Python (the thread can be found > here <). Unfortunate, but the whole thing was still good practice, and maybe at some point I will look into building a standalone solution outside of Nuke.
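To show why a standalone tool is tempting: outside of Nuke, writing individual pixels is trivial. A minimal sketch, assuming the plain OpenEXR Python bindings (the file name and example values are made up):

    import array
    import OpenEXR
    import Imath

    # assumed example data: the condensed strip, one [R, G, B] triple per unique WP pixel
    pixelsWPos = [[0.1, 2.3, -4.5], [1.0, 0.0, 3.2], [-2.2, 1.1, 0.4]]
    width, height = len(pixelsWPos), 1

    header = OpenEXR.Header(width, height)
    f32 = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))
    header['channels'] = {'R': f32, 'G': f32, 'B': f32, 'A': f32}

    def chan(values):
        return array.array('f', values).tobytes()  # raw float32 scanline

    out = OpenEXR.OutputFile('wpos_condensed.exr', header)
    out.writePixels({'R': chan(p[0] for p in pixelsWPos),
                     'G': chan(p[1] for p in pixelsWPos),
                     'B': chan(p[2] for p in pixelsWPos),
                     'A': chan([1.0] * width)})
    out.close()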

The core concept of the script boils down to the following lines:

analysing and adding the values:

import nuke

for frame in InputRange:
    nuke.frame(frame)  # step to this frame so sample() evaluates it
    for line in range(0, maxX):
        for row in range(0, maxY):
            if InputPic.sample("a", line, row) != 0:  # only pixels with content
                wpos = [InputWPos.sample(c, line, row) for c in ("red", "green", "blue")]
                if wpos not in pixelsWPos:  # skip double entries
                    pixelsBeauty.append([InputPic.sample(c, line, row) for c in ("red", "green", "blue")])
                    pixelsWPos.append(wpos)
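A side note on this loop: checking 'not in' against a plain list scans every stored entry for each new pixel, so the cost grows quadratically with the number of unique pixels. The dict sketch further up avoids that, though the sample() calls themselves remain the dominant cost.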

the (admittedly ugly) write process – a 1 px crop window steps across the output strip, a value changer node injects the stored pixel values, and the write node is executed once per pixel:

for amount in range(len(pixelsWPos)):  # one pass per stored pixel
    ReadFileWPos.knob("reload").execute()  # pick up the pixel written in the last pass
    pixelsWPos[amount].append(1)  # alpha = 1 for the stored RGB triple

    # box order is [x, y, r, t] – a 1 px window at column 'amount'
    Cropmasking.knob("box").setValue([amount, 0, amount + 1, 1])
    ValuechangerWPos.knob("value").setValue(pixelsWPos[amount])
    nuke.execute("OutputFileWPos", start=1, end=1)  # one render per pixel
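For scale: every unique pixel triggers its own nuke.execute() call, so even the condensed 23-pixel strip needs 23 renders plus 23 reloads. That per-pixel render is exactly the kind of absurd workaround mentioned above, and it is why the write stage alone kills the performance on real sequences – compare it with the standalone EXR sketch, where the same strip is a single writePixels() call.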