Delusional Perception





Insight


Inspired by Michael Murphy's installation "Perceptual Shift", the idea arose for a script that can turn any 2D image into a pile of 3D particles. The result can be read as a point cloud, but it is different: a point cloud essentially gives you 3D geometry, whereas the effect here is a set of chaotic floating particles with no particular order. Only when viewed from the right angle does the audience see an image. The script's name, Delusional Perception, is taken from a symptom which, according to Oxford Reference, is "a schneiderian symptom in which a person believes that a normal percept (product of perception) has a special meaning for him or her. For example, a cloud in the sky may be misinterpreted as meaning that someone has sent that person a message to save the world. While the symptom is particularly indicative of schizophrenia, it also occurs in other psychoses, including mania (in which it often has grandiose undertones)."



MEL to Python


There was already a MEL script that collects the RGB value of each pixel of a picture and turns it into spherical particles on a surface, so a conversion from MEL to Python was needed.
The only complication in the new Python code was an additional feature: the freedom to generate different shapes instead of just a sphere. The solution is as follows:

  masterParticle = 'cmd.' + particleShape + '()'
  exec (masterParticle)
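
An equivalent and arguably safer approach is to look the creation command up on the cmds module with getattr. This is a suggested alternative, not part of the original script:

def createMasterParticle (particleShape):
    # e.g. 'polyCube' -> cmd.polyCube(); unknown names raise AttributeError
    import maya.cmds as cmd
    return getattr (cmd, particleShape)()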
 

Another feature added to the Python code is the ability to automatically create a plane whose size is proportional to the original image. The final converted code at this stage is as follows:

def floatingParticles (filePath, particleShape = 'sphere', particleSize = 0.1, spacing = 0):

    # importing the Maya commands module
    import maya.cmds as cmd

    # creating the shading group
    filenode = cmd.shadingNode ('file', asTexture = True)
    shader = cmd.shadingNode ('lambert', asShader = True)
    cmd.connectAttr ('%s.outColor' %filenode, '%s.color' %shader)
    cmd.setAttr ('%s.fileTextureName' %filenode, filePath, type = 'string')

    # creating the surface and assigning the shading group
    fileWidth = cmd.getAttr('%s.outSizeX' %filenode)
    fileHeight = cmd.getAttr('%s.outSizeY' %filenode)
    nurbsName = 'source'
    nurbsWidth = fileWidth / 100
    nurbsHeight = fileHeight / 100
    nurbsHeightRatio = fileHeight / fileWidth
    object = cmd.nurbsPlane (name = nurbsName, axis = [0, 1, 0], width = nurbsWidth, lengthRatio = nurbsHeightRatio)
    cmd.select (object)
    cmd.hyperShade (assign = shader)
 
    # texture map samples data
    nu = int(nurbsWidth / particleSize / 2 * (1 - spacing*5))
    nv = int(nurbsHeight / particleSize / 2 * (1 - spacing*5))
    cmd.select (filenode)
    rgb = cmd.colorAtPoint (output = 'RGB', samplesU = nu, samplesV = nv)
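    # parametric step between adjacent samples across the 0-1 UV range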
    deltaU = 1.0 / (nu - 1)
    deltaV = 1.0 / (nv - 1)
    currU = 0
    index = 0
 
    # creating particles
    shapes = []
    masterParticle = 'cmd.' + particleShape + '()'
    exec (masterParticle)
    cmd.rename('particle')
    for n in range (0, nu):
        currU += deltaU
        currV = 0
        for m in range (0, nv):
            r = rgb[index]
            g = rgb[index + 1]
            b = rgb[index + 2]
            sumColor = r + g + b
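            # a particle is created only where the sample is dark enough;
            # near-white samples (sum >= 2.8 out of a maximum 3.0) are skipped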
            if (sumColor < 2.8):
                p = cmd.pointOnSurface (object[0], u = currU, v = currV, position = True)
                x = p[0]
                y = p[1]
                z = p[2]
                currentShape = cmd.instance('particle')
                cmd.select(currentShape[0])
                cmd.scale(particleSize, particleSize, particleSize)
                cmd.move (x, y, z)
                shapes.append(currentShape[0])
            index += 3
            currV += deltaV
    cmd.group (shapes, name = 'floatingParticles')
 
    # hiding the plane and the master particle
    cmd.select(object[0])
    cmd.rename('masterShot')
    cmd.hide()
    cmd.select('particle')
    cmd.rename('masterParticle')
    cmd.hide()
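
A hypothetical invocation, with the file path and parameter values chosen purely for illustration:

floatingParticles ('C:/projects/source.jpg', particleShape = 'polyCube', particleSize = 0.1, spacing = 0)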

 


Placement of the Camera

For the audience to see the particles from the right angle, a camera must already be in place. Based on the focal length of the camera's lens and the size of the input image, a camera is generated whose frame is aligned horizontally or vertically with the edges of the source image.

A variable for the focal length, named "cameraFocalLength", is of course defined in case the end user wants to change the camera's default focal length, which in most 3D applications is 35 millimeters. Accordingly, a new location is set for the camera so that the picture is aligned in the frame.
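
A minimal sketch of the placement math, assuming a 36 mm horizontal film back and the source plane lying flat in the XZ plane (as created by floatingParticles); the helper name placeCamera is hypothetical:

import math
import maya.cmds as cmd

def placeCamera (planeWidth, cameraFocalLength = 35.0):
    # horizontal view angle from the film back: fov = 2 * atan (aperture / (2 * focal))
    fov = 2 * math.atan (36.0 / (2 * cameraFocalLength))
    # distance at which the plane exactly fills the frame horizontally
    distance = (planeWidth / 2.0) / math.tan (fov / 2.0)
    camera = cmd.camera (focalLength = cameraFocalLength)
    cmd.move (0, distance, 0, camera[0])
    cmd.rotate (-90, 0, 0, camera[0])    # look straight down at the plane
    return camera[0]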




Placement of the Particles

Because we are viewing the whole scene from a perspective view, we have a vanishing point. What is a vanishing point? In simple words, as objects move farther away from the camera or an observer, they appear smaller and drift slightly toward the vanishing point; the classic example is a railroad track receding into the distance. For this code, making objects smaller is in fact exactly what is needed: the higher the RGB value, the smaller the particle gets. The problem with moving the particles in depth, however, is that slight drift toward the vanishing point, which would distort the final image.
To solve this problem, the view angle of the camera is calculated first. This is done from the size of the image sensor in the camera, which in most 3D applications is a 36x24 millimeter chip (equivalent to 35 millimeter film), together with the focal length of the lens, which by default in most 3D applications is 35 millimeters. With the sensor size and focal length, the view angle is known, and with a little bit of math we finally know at what angle each particle should move so it appears to stay in its original position instead of drifting toward the vanishing point.
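
A minimal sketch of the underlying idea, an assumption about the geometry rather than the script's exact math: pushing a particle along the ray from the camera eye through its original position, while scaling it by the same factor, leaves its projected position and apparent size unchanged:

import math

def pushAlongRay (eye, point, factor):
    # hypothetical helper: eye and point are (x, y, z) tuples;
    # factor > 1 pushes the particle deeper, factor < 1 pulls it closer
    return tuple (e + (p - e) * factor for e, p in zip (eye, point))

# horizontal view angle for a 36 mm film back and a 35 mm lens, in degrees
viewAngle = math.degrees (2 * math.atan (36.0 / (2 * 35.0)))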





Conclusion

A further improvement for this code would be the use of nParticles instead of instances of an object. Also, apart from a digital output, the code could be developed to produce a data sheet for an actual installation of floating objects in a gallery.




Delusional Perception User Interface




After importing the image, you have the option to pick a geometry to be assigned to the particles; the default shape is a sphere. The remaining parameters are:

Size is the particle size. The smaller the size, the more shapes the script generates and the longer the process takes.
Color Range is the range of pixel values that will be calculated to receive particles. If set to 1, every color in the image is calculated.
Depth is the distance across which the particles are spread in depth.
Focal Length, 35 by default, is the camera's focal length; based on it, the script moves the camera so the image is aligned in its frame.







Blobbymobile




Challenge


For this assignment, we are challenged to write a script that can later be executed by Renderman to create a blobby effect on the vertices of any 3D geometry. This process is done with the help of four scripts: (1) MEL User Interface, (2) RMAN User Interface, (3) RMAN Code, (4) Blobby Effect Hard Code.
The pipeline for the four scripts is as follows:


I developed the hard code further so that, in addition to creating bubbles on each vertex, the bubbles float into the air over time and burst upon reaching a user-defined maximum size, producing a kind of boiling effect.
The parameters that can be tweaked for the effect are as follows, with a sketch of the emitted RIB after the list:

Radius is the default size of each bubble.
Randomized Size is the amount of size randomization for each bubble.
Maximum Size is the radius at which the bubbles burst.
Stickiness makes the bubbles stick together rather than floating into the air.
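
As a rough illustration of what the hard code emits, here is a minimal Python sketch, not the assignment's actual pipeline, that writes a RenderMan RIB Blobby primitive with one blended sphere per point; the helper name writeBlobby and its parameters are assumptions:

def writeBlobby (path, points, radius = 0.5):
    # one ellipsoid leaf (opcode 1001) per point; each leaf's 4x4 transform
    # is stored at an offset into the float array
    codes, floats = [], []
    for i, (x, y, z) in enumerate (points):
        codes += [1001, i * 16]
        floats += [radius, 0, 0, 0,  0, radius, 0, 0,  0, 0, radius, 0,  x, y, z, 1]
    # opcode 0 blends (adds) all the leaves together
    codes += [0, len (points)] + list (range (len (points)))
    with open (path, 'w') as f:
        f.write ('Blobby %d [%s] [%s] [""]\n' % (len (points),
                 ' '.join (map (str, codes)), ' '.join (map (str, floats))))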


Renders


First, I tested the render on a simple sphere, and tweaked the parameters to see how the user can achieve a boiling-like effect.


For the beauty shot, I downloaded a Batmobile from Turbosquid.com, a model by Danish Riaz, and applied the code to its surface.






Obstacles


The final result of the script has an issue with the movement of the bubbles along the Y axis, which needs to be solved in the future. When I applied it to a moving object, like a HumanIK character, I realized that the movement of the bubbles is relative to the position of the vertices, not to the world. Thus, in the case of a moving object, the bubbles do not float relative to the world but relative to the moving object, which in our case is not what we want. Here's a render of a HumanIK character, which I made look like a boiling mummy.



Improvements


As an improvement to this code, instead of bubbles, the script could read an external geometry, in our case some tiny bats, to make the result more interesting.


Stylized Trees




This project is actually a follow-up to the previous assignment, the splatting procedure, but instead of bubbles, it's going to populate a tree with leaves.

But before getting to that, how about creating a tree with MEL?

There are of course ways to create beautiful trees, using an L-system for instance, but a simple script that creates a tree in Maya is not impossible either. That's why I started my project by developing a script to create the tree first; Renderman then populates the tree with leaves, which are read from a RIB archive of a pre-modeled leaf.


Tree Creation Script


The procedure is really simple. The script creates a cylinder whose attributes, such as divisions and radius, are configured by the user. Then its edge loops are offset to make it look natural and organic rather than a plain tube; this gives us the trunk of the tree. Next, the script creates branches by instancing the trunk and placing the instances on the trunk's vertices. The rotation range picked for the branches is between -70 and +70 degrees, because branches tend to grow towards the sun and the sky.
The script creates one level of branches. It was also tested with one more level (recursing once), but the process took longer than anticipated (about 5,000 instances), which is why that step was removed from the script.



The final MEL script for creating a tree (just one step) is as follows.

///treeCreation by Mazyar Sharifian on 10/25/15
  
global proc treeCreation (float $radius, int $xDivision, float $height, int $yDivision, float $branchGrowth){
  
    ////cylinder
    string $object[] = `polyCylinder -r $radius -h $height -sx $xDivision -sy $yDivision -sz 1 -ax 0 1 0 -rcp 0 -cuv 3 -ch 1 -n "trunk"`;
  
    ////operation of creating a trunk out of the cylinder
    for ($n = 1; $n < $yDivision; $n++){
        float $rand = $n * `rand (-$radius/2) ($radius/2)`;
        int $vtxStart = $n * $xDivision;  
        int $vtxEnd = $vtxStart + $xDivision - 1;
        string $selectedVtx = $object[0] + ".vtx[" + $vtxStart + ":" + $vtxEnd + "]";
        string $objectScalePivot = $object[0] + ".scalePivot";
        string $objectRotatePivot = $object[0] + ".rotatePivot";
        select $selectedVtx;
        scale -r 0.5 1 0.5;
        move -r $rand ($rand/$yDivision) $rand;
        move 0 (-$height/2) 0 $objectScalePivot $objectRotatePivot;
    }
  
    ////operation of filling the trunk with branches
    int $vertices[] = `polyEvaluate -v $object[0]`;
    string $instances[];
    string $branchName;
    clear ($instances);
  
    for($i = 0; $i < $vertices[0]; $i++){
        string $vtx = $object[0] + ".vtx[" + $i + "]";
        float $vtxPosition[] = `pointPosition $vtx`;
        if(($vtxPosition[1]+$height/2) > `getAttr ($object[0] + ".boundingBoxSizeY")` * (1 - $branchGrowth) && $vtxPosition[1] != `getAttr ($object[0] + ".boundingBoxSizeY")`){
            $instance = `instance $object[0]`;
            appendStringArray ($instances, $instance, 1);
            $randRotationX = `rand -70 70`;
            $randRotationZ = `rand -70 70`;
            $randSize = `rand 0.1 0.6`;
            select $instance;
            scale -r $randSize $randSize $randSize; 
            rotate -r $randRotationX 0 $randRotationZ;
            move -r $vtxPosition[0] ($vtxPosition[1]+$height/2) $vtxPosition[2];
        }
    }
  
    $branches = `group -n "branches" $instances`;
    group -n "tree" $branches $object[0];
  
}
  
////seed(1)
treeCreation (0.5, 3, 10, 10, 0.5);





Populating the Tree with Leaves


To populate the tree with leaves, we will be using RiMel. First the code loads a RIB model of a leaf, then it populates each vertex of the tree with a leaf. The user can specify the size of each leaf, the amount of randomness, and the percentage of each branch that gets populated with leaves.
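
As an illustration of the idea, not the course's RiMel pipeline, here is a minimal Python sketch that emits one ReadArchive call per vertex into a RIB file; the helper name writeLeaves and its parameters are assumptions:

import random
import maya.cmds as cmd

def writeLeaves (path, mesh, leafArchive = 'leaf.rib', size = 0.2, randomness = 0.5, percentage = 1.0):
    numVerts = cmd.polyEvaluate (mesh, vertex = True)
    with open (path, 'w') as f:
        for i in range (numVerts):
            if random.random () > percentage:
                continue    # this vertex gets no leaf
            x, y, z = cmd.pointPosition ('%s.vtx[%d]' % (mesh, i), local = True)
            s = size * (1 + random.uniform (-randomness, randomness))
            f.write ('TransformBegin\n')
            f.write ('Translate %f %f %f\n' % (x, y, z))
            f.write ('Scale %f %f %f\n' % (s, s, s))
            f.write ('ReadArchive "%s"\n' % leafArchive)
            f.write ('TransformEnd\n')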



This process is executed by Renderman, and the benefit of using it is that leaves can be generated for multiple trees of different shapes, rather than modeling actual leaves on each tree individually.









Obstacle


The concept of normals was an issue for me, because the normal of a vertex is complicated: a vertex has several normal vectors, which need to be averaged, and since the average is a vector, it then has to be converted to rotation angles. What is also tricky is the rotation sequence, because once you rotate a shape, the second rotation is applied from its new orientation in space. That's why it took me a while to understand how to derive the angles from the averaged normal; before finishing, I found out that MEL scripts for this (Axis to Vector) are already provided by Professor Malcolm Kesson. Anyway, here's a glance at my struggle:
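
In essence, the step amounts to the following, a minimal sketch assuming the leaf's local +Z axis should aim along the averaged unit normal and that Maya's default xyz rotate order is used; this is not Professor Kesson's script:

import math

def normalToRotation (nx, ny, nz):
    # normalize the averaged normal first
    length = math.sqrt (nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / length, ny / length, nz / length
    # pitch about X, then yaw about Y, aims the +Z axis along the normal
    rx = -math.degrees (math.asin (ny))
    ry = math.degrees (math.atan2 (nx, nz))
    return rx, ry, 0.0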

Something else worth mentioning is the usage of "pointPosition" with different combinations of local and world space, in addition to the RiMel command Identity. The only correct result came from not using Identity and setting "pointPosition" to local.

Solar System




Challenge


The challenge here was to create geometry at render time, with Renderman, out of the X, Y, and Z coordinates of other geometries. We were asked to either use a motion capture database as our source geometry or create a database ourselves. I decided to work with particles, so I started with a MEL/Python script from www.fundza.com that converts the path of each particle in a system into a curve. The only problem I encountered was that it created just one RIB file at the end of the animation. I solved the issue by generating a RIB sequence instead of a single RIB file.
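
A minimal sketch of the per-frame idea, assuming the source is a particle shape whose positions are dumped as a RIB Points primitive; the helper name writeRibSequence, the zero-padded naming, and the constant point width are assumptions:

import maya.cmds as cmd

def writeRibSequence (particleShape, baseName, startFrame, endFrame):
    for frame in range (startFrame, endFrame + 1):
        cmd.currentTime (frame)
        count = cmd.particle (particleShape, query = True, count = True)
        positions = []
        for i in range (count):
            positions += cmd.particle (particleShape, query = True,
                                       attribute = 'position', order = i)
        # zero-padded frame number so the files line up as a sequence
        with open ('%s.%04d.rib' % (baseName, frame), 'w') as f:
            f.write ('Points "P" [%s] "constantwidth" [0.05]\n'
                     % ' '.join ('%f' % p for p in positions))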



To load the RIB sequence, simply use $F in place of the frame number, and remember that if you are planning to render your scene on a render farm, make sure to use a relative link to your project folder, and everything's sorted.





Concept


For the render, I first decided to work on a larger scale and chose a part of our cosmos. I started with visualizing the path of all the planets of the solar system, including our sun, through the Milky Way galaxy. I was inspired by a video from DjSadhu and added a correction to the movement as well, by implementing a 60-degree lean of the orbiting planets instead of a 90-degree helical movement around the sun.

For the setup, I used nParticles representing each planet and the sun in our solar system, and applied a vortex field to them so they orbit around the sun. I didn't use the actual size or velocity of each planet, because visually it wouldn't be interesting.



I also matte-painted the environment using a couple of images and made it tileable so I can repeat it in any shape I want, like the sphere I am currently using as the environment layer.




God Particle


As a second experiment, I worked in the quantum realm and tried to simulate the moment of the discovery of the God Particle (the Higgs boson) at CERN. As references, I googled a lot of illustrations for their color palette, and for the motion, I was inspired by the CERN scene in the movie "Angels & Demons".





Conclusion


Using Renderman as a bridge to create extra geometry at render time is incredibly useful, both time-wise and aesthetically. For this specific project, I will develop it further to generate a 360-degree loop animation of the solar system's movement, since a hack for creating a 360-degree render of your scene with Renderman was introduced at SIGGRAPH 2015 by Google developer Mach Kobayashi.