Processing 2: Creative Coding Hotshot

Project 2

The Stick Figure Dance Company

Making Processing see

Our next task is to install the SimpleOpenNI library, which enables us to use the OpenNI API in Processing. First, we will learn how to access and display the depth image of the Kinect controller. Then, we will use the user-tracking capabilities of the SimpleOpenNI framework and define some callback functions to get notified when a user is detected or when the tracked user disappears. And finally, we will color all the pixels in the depth image that belong to the user, so the user can see what is being tracked.

Engage Thrusters

Let's teach Processing how to see:

1. At the time of writing, the SimpleOpenNI framework could not be installed using the new Library Manager. To install it manually, download the SimpleOpenNI package for your operating system from http://code.google.com/p/simple-openni/downloads/list.

2. Unzip it into the libraries folder in your sketchbook folder. You can find the path to the sketchbook folder in the Preferences dialog.

3. Restart Processing if it is currently running, to make sure the library is found by your installation.

4. Now, create a new sketch and click on Import Library… under the Sketch menu to include the SimpleOpenNI library.

5. Add a setup() and a draw() method to your sketch.

import SimpleOpenNI.*;

void setup() {
}

void draw() {
}

6. In the setup() method, resize the sketch window to 640 x 480 pixels and define an OpenNI context object. We use the enableDepth() method to start capturing the depth image and setMirror() to activate the mirror function.

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size( 640, 480 );
  context = new SimpleOpenNI( this );
  context.setMirror( true );
  context.enableDepth();
}

7. Now add the following lines to your draw() method to make SimpleOpenNI load the next frame and calculate the depth image. Then use context.depthImage() to get the data as a PImage object, and image() to draw it to your sketch window.

void draw() {
  context.update();
  image( context.depthImage(), 0, 0 );
}

8. Run your sketch and wave at the camera. You should see a depth image of yourself and your surroundings.

9. Turn on user detection in the setup() method by adding the following line:

void setup() {
  size( 640, 480 );
  context = new SimpleOpenNI( this );
  context.setMirror( true );
  context.enableDepth();
  context.enableUser( SimpleOpenNI.SKEL_PROFILE_ALL );
}

10. You also need a variable to store the userId of the first user that the SimpleOpenNI framework detects, so add a player variable and initialize it to -1 at the beginning of your sketch.

int player = -1;

11. To get notified when SimpleOpenNI has detected a user, we add the following two callback methods to our sketch:

void onNewUser( int userId ) {
  println( "new user " + userId );
  if ( player == -1 ) {
    player = userId;
  }
}

void onLostUser( int userId ) {
  println( "lost user " + userId );
  if ( userId == player ) {
    player = -1;
  }
}


12. To see where the Kinect has found a user, you need to make all the pixels that belong to the user appear green in your image. You can do this by adding the following code to your draw() method:

void draw() {
  context.update();
  image( context.depthImage(), 0, 0 );

  loadPixels();
  if ( player != -1 ) {
    int[] userPixels = context.getUsersPixels( player );
    for ( int p = 0; p < width * height; p++ ) {
      if ( userPixels[p] != 0 ) {
        pixels[p] = color( 0, green( pixels[p] ), 0 );
      }
    }
  }
  updatePixels();
}

13. If you run your code now, the detected user's pixels should turn green, like in the following screenshot:


Objective Complete - Mini Debriefing

We have just learned how to access the depth image provided by the Kinect using Processing. Starting with step 9, we used the tracking functions of the SimpleOpenNI framework. We added a callback function named onNewUser(), which gets called by the library every time a user is detected, and a second one named onLostUser() to receive a notification when the user leaves the tracking range.

In our draw() method, we used a pixel map that we got from the getUsersPixels() method to color all the pixels in the depth image green if they belong to the user and leave them unchanged if they don't. The array returned by getUsersPixels() contains a value of 0 for every non-user pixel and a nonzero value for the ones we want to color. For every nonzero entry that marks one of the user's pixels, we take the current gray value of the depth image and replace it with a green value of the same intensity.
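As an aside, this masking step can be reproduced outside Processing. The following plain-Java sketch is not from the book: the mask and gray pixel values are made-up stand-ins for getUsersPixels() and the depth image, and color()/green() are re-implemented to mimic Processing's versions (ignoring alpha). It shows how only the green channel survives for user pixels:

```java
public class UserMaskDemo {
    // Pack an RGB color into an int, as Processing's color() does (alpha ignored).
    static int color(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    // Extract the green channel, as Processing's green() does.
    static int green(int c) {
        return (c >> 8) & 0xFF;
    }

    public static void main(String[] args) {
        // Hypothetical 4-pixel depth image: gray pixels (r == g == b).
        int[] pixels = { color(200, 200, 200), color(50, 50, 50),
                         color(120, 120, 120), color(255, 255, 255) };
        // Hypothetical mask from user tracking: nonzero entries belong to the user.
        int[] userPixels = { 0, 1, 1, 0 };

        for (int p = 0; p < pixels.length; p++) {
            if (userPixels[p] != 0) {
                // Keep only the green channel, as in the sketch's draw() loop.
                pixels[p] = color(0, green(pixels[p]), 0);
            }
        }

        System.out.println(Integer.toHexString(pixels[1])); // user pixel: green only
        System.out.println(Integer.toHexString(pixels[0])); // non-user pixel unchanged
    }
}
```

The gray value 50 becomes the pure green 0x003200, while non-user pixels such as 0xC8C8C8 pass through untouched.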

Classified Intel

Apart from the infrared and RGB cameras, the Kinect also comes with a motor that allows the camera tilt to be adjusted. Unfortunately, the OpenNI framework has no support for controlling this motor. There is a second open source framework, called libfreenect, that can also be used to access the Kinect. It lacks the user-tracking features of OpenNI but does come with support for motor control. So if you need to adjust the tilt of your Kinect, because it keeps cutting off the heads or feet of your players, you can install the libfreenect library and its demo applications and use them to adjust your Kinect.

Making a dancer

In the previous section, we used the user-tracking capabilities of the OpenNI framework to locate the user in the depth image provided by the Kinect's infrared camera. Now we will take it one step further and locate the body parts of the player. The feature we are going to use in this task is called skeleton tracking. The OpenNI skeleton tracker locates certain key points of the human body, which we will use to construct our stick figure. For each player the Kinect can see, we get the location of the head, neck, torso, shoulders, elbows, hands, hips, knees, and feet.

We will also show the image of the infrared camera and the user pixels from the previous section as a little heads-up display (HUD), so the player can see what the Kinect is tracking.


Engage Thrusters

1. Create a new Processing sketch and import the SimpleOpenNI library. Then add a setup() and a draw() method.

import SimpleOpenNI.*;

void setup() {
}

void draw() {
}

2. We create three methods that will be called from our draw() method to draw the HUD, the dancer, and the dancefloor.

void drawHUD() {

}

void drawDancer() {

}

void drawFloor() {

}

3. In our draw() method, we need to add calls to the three methods we just created. Since the Kinect returns coordinates in a different coordinate system than the one Processing uses by default, we also need to rotate and scale our viewpoint before calling the drawDancer() and drawFloor() methods.

void draw() {
  background( 255 );
  context.update();

  drawHUD();

  translate( width/2, height/2, 0 );
  rotateX( PI );
  scale( 0.5 );
  translate( 0, -100, -400 );
  rotateY( radians( 30 ));

  drawDancer();
  drawFloor();
}
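To see what rotateX( PI ) does to a tracked point: a rotation by π around the x axis maps (x, y, z) to (x, -y, -z), flipping the Kinect's y-up coordinates into Processing's y-down screen space. A minimal sketch of that mapping, in plain Java with no Processing dependency (the joint position is a made-up example value):

```java
public class RotateXDemo {
    // Rotate a point by angle (radians) around the x axis,
    // the same transform Processing applies with rotateX().
    static double[] rotateX(double[] p, double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[] {
            p[0],
            c * p[1] - s * p[2],
            s * p[1] + c * p[2]
        };
    }

    public static void main(String[] args) {
        // A hypothetical joint position in Kinect coordinates (y points up).
        double[] joint = { 100, 200, 300 };
        double[] flipped = rotateX(joint, Math.PI);
        // x stays the same; y and z change sign (up to floating-point rounding).
        System.out.printf("%.1f %.1f %.1f%n",
                          flipped[0], flipped[1], flipped[2]);
    }
}
```

The subsequent scale() and translate() calls then just shrink the figure and push it to a comfortable spot in front of the virtual camera.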


4. Now we need to add a SimpleOpenNI context to our sketch and initialize it in the setup() method. We also need to set the size of our sketch to 1024 x 768 pixels.

SimpleOpenNI context;
int player = -1;
boolean calibrated = false;

void setup() {
  size( 1024, 768, P3D );
  context = new SimpleOpenNI( this );
  context.enableDepth();
  context.enableUser( SimpleOpenNI.SKEL_PROFILE_ALL );
}

5. To create the HUD, we need to add the onNewUser() and onLostUser() callback methods, like we did in the previous section. As we also want to use the skeleton tracker, we start the skeleton calibration in the onNewUser() function.

void onNewUser( int userId ) {
  println( "onNewUser - userId: " + userId );
  if ( player == -1 ) {
    println( " request calibration" );
    player = userId;
    calibrated = false;
    context.requestCalibrationSkeleton( userId, true );
  }
}

void onLostUser( int userId ) {
  if ( player == userId ) {
    println( "lost user" );
    player = -1;
    calibrated = false;
  }
}

6. We also add an onEndCalibration() callback function that gets called when the calibration is finished. If the calibration is not successful, we can try to find the user a second time by starting pose detection.

void onEndCalibration( int userId, boolean successfull ) {
  println( "onEndCalibration - userId: " + userId +
           ", successfull: " + successfull );
  if ( player == userId ) {
    if ( successfull ) {
      println( " User calibrated !!!" );
      context.startTrackingSkeleton( userId );
      calibrated = true;
    } else {
      println( " Failed to calibrate user !!!" );
      println( " Start pose detection" );
      context.startPoseDetection( "Psi", userId );
    }
  }
}

7. If SimpleOpenNI detects the requested pose, the onStartPose() function gets called. So let's add this function to our sketch to restart the calibration once the pose has been detected.

void onStartPose( String pose, int userId ) {
  println( "onStartPose - userId: " + userId + ", pose: " + pose );
  println( " stop pose detection" );
  if ( player == userId ) {
    context.stopPoseDetection( userId );
    context.requestCalibrationSkeleton( userId, true );
  }
}

8. Now we display the image of the infrared camera and color the pixels we get from the getUsersPixels() method. This time, we want the image to occupy only the top-left corner, not the entire sketch window, so we also need to adjust the pixel coloring code from the previous sketch: we only take every fourth pixel of every fourth row to shrink the image. Add the following code to the drawHUD() method:

void drawHUD() {
  image( context.depthImage(), 0, 0, 160, 120 );

  loadPixels();
  if ( player != -1 ) {
    int[] up = context.getUsersPixels( player );
    for ( int y = 0; y < 480; y += 4 ) {
      for ( int x = 0; x < 640; x += 4 ) {
        if ( up[ y * 640 + x ] != 0 ) {
          float g = green( pixels[ (y/4) * 1024 + x/4 ]);
          if ( calibrated ) {
            pixels[ (y/4) * 1024 + x/4 ] = color( 0, g, 0 );
          } else {
            pixels[ (y/4) * 1024 + x/4 ] = color( g, g, 0 );
          }
        }
      }
    }
  }
  updatePixels();
}
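The index arithmetic in drawHUD() maps a pixel at (x, y) in the 640 x 480 user mask to (x/4, y/4) in the 1024-pixel-wide sketch window, that is, to index (y/4) * 1024 + x/4. A quick standalone Java check of that mapping (the 640 and 1024 widths come from the sketch above; everything else is plain arithmetic):

```java
public class HudIndexDemo {
    // Index of (x, y) in the full-resolution 640x480 mask.
    static int maskIndex(int x, int y) {
        return y * 640 + x;
    }

    // Index of the corresponding downscaled pixel in a 1024-wide window.
    static int hudIndex(int x, int y) {
        return (y / 4) * 1024 + x / 4;
    }

    public static void main(String[] args) {
        // Sampling every fourth pixel of every fourth row shrinks
        // the 640x480 image to 160x120 in the window's top-left corner.
        System.out.println(maskIndex(0, 0));
        System.out.println(hudIndex(0, 0));
        // The last sampled mask pixel lands at HUD coordinates (159, 119),
        // i.e. index (476/4) * 1024 + 636/4, still inside the 160x120 corner.
        System.out.println(hudIndex(636, 476));
    }
}
```

Note that because y steps in multiples of 4, (y/4) * 1024 equals y * 1024 / 4, which is why both spellings of the index behave identically in the loop.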

9. The next method we are going to implement is the drawDancer() method. Since we have activated the skeleton tracker, we can now start using the coordinates of the limbs and joints and draw lines between them. For the head, we will use a sphere that is scaled down to turn it into an ellipsoid, like in the following diagram:

(Diagram: the stick figure's tracked joints: head, neck, shoulders, elbows, hands, torso, hips, knees, and feet.)

10. We need to add the following code to our sketch to draw the head and the torso:

void drawDancer() {
  if ( player != -1 && context.isTrackingSkeleton( player )) {
    pushMatrix();
    scale( .1 );
    stroke( 0 );
    strokeWeight( 2 );
    fill( 0 );

    PVector v1 = new PVector();
    PVector v2 = new PVector();

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_HEAD, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_NECK, v2 );

    pushMatrix();
    translate( v1.x, v1.y, v1.z );
    scale( .5, .5, 1 );
    sphere( v1.dist( v2 ));
    popMatrix();

    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_NECK, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_LEFT_SHOULDER, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_NECK, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_RIGHT_SHOULDER, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_NECK, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_TORSO, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

11. We then add the following code to draw the arms:

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_LEFT_SHOULDER, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_LEFT_ELBOW, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_LEFT_ELBOW, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_LEFT_HAND, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_RIGHT_SHOULDER, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_RIGHT_ELBOW, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_RIGHT_ELBOW, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_RIGHT_HAND, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );


12. And finally, we add the following code to draw the hips and legs:

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_TORSO, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_LEFT_HIP, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_TORSO, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_RIGHT_HIP, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_LEFT_HIP, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_RIGHT_HIP, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_LEFT_HIP, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_LEFT_KNEE, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_LEFT_KNEE, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_LEFT_FOOT, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_RIGHT_HIP, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_RIGHT_KNEE, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_RIGHT_KNEE, v1 );
    context.getJointPositionSkeleton( player, SimpleOpenNI.SKEL_RIGHT_FOOT, v2 );
    line( v1.x, v1.y, v1.z, v2.x, v2.y, v2.z );

    popMatrix();
  }
}
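The radius of the head sphere in drawDancer() is simply the distance between the head and neck joints, which is what PVector's dist() computes. A standalone version of that calculation, in plain Java (the coordinates are made-up stand-ins for tracked joint positions):

```java
public class JointDistanceDemo {
    // Euclidean distance between two 3D points,
    // equivalent to Processing's PVector.dist().
    static float dist(float x1, float y1, float z1,
                      float x2, float y2, float z2) {
        float dx = x2 - x1, dy = y2 - y1, dz = z2 - z1;
        return (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        // Hypothetical head and neck positions in millimeters,
        // roughly the scale at which SimpleOpenNI reports joint positions.
        float headX = 0, headY = 1600, headZ = 2000;
        float neckX = 0, neckY = 1400, neckZ = 2000;

        // This distance is what drawDancer() passes to sphere() as the radius.
        float radius = dist(headX, headY, headZ, neckX, neckY, neckZ);
        System.out.println(radius); // 200.0
    }
}
```

Sizing the head from the neck-to-head distance means the stick figure's proportions automatically follow the tracked player, whatever their height or distance from the camera.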


13. So far, our dancer is getting tracked and drawn, but if you run the code now, it will float in a void. To change this, add the following code to the drawFloor() method to give our dancer something to dance on:

void drawFloor() {
  noStroke();
  fill( 128 );
  beginShape( QUADS );
  vertex( -400, -100, -400 );
  vertex( -400, -100, 400 );
  vertex( 400, -100, 400 );
  vertex( 400, -100, -400 );
  endShape();
}

14. Now run the sketch. The stick figure will follow your dance moves, like in the following screenshot:
