Update FaceTrackNoIR


To the FaceTrackNoIR devs,

Thanks heaps for your hard work. After the 1.7 update and hotfix, the product is by far the best it has ever been. I've found the response to be very smooth, with little hit on CPU performance. I don't understand the inner workings of what you have done, but it seems the new filter, the latest FaceAPI and the code tweaking have done wonders.

Thanks again for your work.

Cheers

Rough Knight.


Hey, has anyone got a good profile for PointTracker?

Thanks in advance.

Edited by spoonbe


There's no single 'right' profile for PT. It will always vary with your point model's dimensions, the min/max blob size and the threshold. Without understanding what each of these does, it won't work properly.


Thanks for this great program! The PS3 Eye cam is a dream to use! I am using face tracking and it works great in ArmA 2 and FC2!


Hi All

Big thanks to FaceTrackNoIR, FreeTrack and Carl Kenner.

Below is the C++ code for an Xbox 360 Kinect sensor to communicate with FaceTrackNoIR via UDP.

At the bottom is C# code using the FaceTrackingBasics-WPF example; it's slower.

Enjoy :)

Please Note:

  • I'll post a link to the source code and the exe soon.
  • On an i7-920 this code takes 30% CPU.
  • It will not work well if you're in direct sunlight.
  • Tested on Windows 7 64-bit only.
  • I'm not a C++ coder; this was the most efficient face-tracking example I found in the SDK.
  • I'm working on a one-pass-per-frame HeadPose (currently 6% CPU); I'll post it when done.

You'll need:

  • FaceTrackNoIR
  • Xbox 360 Kinect (needs a special power adapter) or Kinect for Windows
  • Visual Studio Express
  • Kinect drivers
  • Kinect SDK 1.7
  • Once the drivers are installed, use the SDK to test the setup.
  • Install the C++ Face Tracking Visualization sample from the Kinect Toolkit.
  • Open it in Visual Studio.

Add the lib for sockets:

Right-click SingleFace and select Properties (this is the project to work on).

Select Linker > Input.

Click Additional Dependencies, then click Edit.

Add Ws2_32.lib under Kinect10.lib.
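
As an aside (my note, not from the original post), MSVC can also pull the library in from source, which avoids editing the project settings:

#pragma comment(lib, "Ws2_32.lib")  // MSVC-specific alternative to Additional Dependencies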

Set up the headers for sockets:

Find stdafx.h under Headers and double-click it to open. Add winsock2.h above windows.h (it must be included before windows.h, or the build will hit winsock redefinition errors):

// Windows Header Files:
#include <winsock2.h>  // <=== Add this line
#include <windows.h>
#include <Shellapi.h>

Open SingleFace.cpp and find:

void SingleFace::FTHelperCallingBack(PVOID pVoid)

Go to the end of the function, i.e.

<<<== Insert code here
}
}
}

Insert the code below above those three closing curly brackets.

Please note these two lines:

si_other.sin_port = htons(5550); // This is the port FaceTrackNoIR expects data on
si_other.sin_addr.S_un.S_addr = inet_addr("127.0.0.1"); // This is localhost, i.e. your PC

// SEND UDP data through a socket to FaceTrackNoIR
struct sockaddr_in si_other;
int s, slen = sizeof(si_other);
WSADATA wsa;

// Initialise Winsock
printf("\nInitialising Winsock...");
if (WSAStartup(MAKEWORD(2,2), &wsa) != 0)
{
    printf("Failed. Error Code : %d", WSAGetLastError());
    exit(EXIT_FAILURE);
}
printf("Initialised.\n");

// Create the socket
if ((s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) == SOCKET_ERROR)
{
    printf("socket() failed with error code : %d", WSAGetLastError());
    exit(EXIT_FAILURE);
}

// Set up the address structure
memset((char *) &si_other, 0, sizeof(si_other));
si_other.sin_family = AF_INET;
si_other.sin_port = htons(5550);
si_other.sin_addr.S_un.S_addr = inet_addr("127.0.0.1");

double test_data[6];
// Translation XYZ (the original post's comments said "Yaw" for all three)
test_data[0] = (double) translationXYZ[0]; // X
test_data[1] = (double) translationXYZ[1]; // Y
test_data[2] = (double) translationXYZ[2]; // Z
// Rotation
test_data[3] = (double) rotationXYZ[1]; // Yaw
test_data[4] = (double) rotationXYZ[0]; // Pitch
test_data[5] = (double) rotationXYZ[2]; // Roll

// Send the message: six doubles, 48 bytes
int err_send = sendto(s, (const char *) test_data, sizeof(test_data), 0, (struct sockaddr *) &si_other, slen);
if (err_send == SOCKET_ERROR)
{
    printf("sendto() failed with error code : %d", WSAGetLastError());
    exit(EXIT_FAILURE);
}

closesocket(s);
WSACleanup();
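
One structural note (a suggestion, not part of the original post): the snippet above runs WSAStartup, socket creation, closesocket and WSACleanup on every frame callback. A sketch of hoisting the one-time setup out of the callback, so only sendto() runs per frame (names are hypothetical):

static SOCKET g_sock = INVALID_SOCKET;
static struct sockaddr_in g_addr;

void InitUdpOnce()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2,2), &wsa);
    g_sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    memset((char *) &g_addr, 0, sizeof(g_addr));
    g_addr.sin_family = AF_INET;
    g_addr.sin_port = htons(5550);                        // FaceTrackNoIR port
    g_addr.sin_addr.S_un.S_addr = inet_addr("127.0.0.1"); // localhost
}

// Then, inside FTHelperCallingBack, per frame:
//     sendto(g_sock, (const char *) test_data, sizeof(test_data), 0,
//            (struct sockaddr *) &g_addr, sizeof(g_addr));
// and call closesocket(g_sock); WSACleanup(); once at shutdown.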

  • Press F5 to run the application.
  • Start FaceTrackNoIR.
  • Change the Source to: FaceTrackNoIR UDP.
  • Under its settings, set the Port to 5550.

If any Windows network popups appear, click Allow for private networks.

Problems:

  • Test that the Kinect samples work OK.
  • Test that FaceTrackNoIR UDP works OK with a laptop with a camera.
  • Sit down in the sun for a while and soak up the rays.

C# code using the FaceTrackingBasics-WPF example:

using System.Net;
using System.Net.Sockets;
using System.Text;

internal void OnFrameReady(KinectSensor kinectSensor, ColorImageFormat colorImageFormat, byte[] colorImage, DepthImageFormat depthImageFormat, short[] depthImage, Skeleton skeletonOfInterest)
{
    // ... existing frame-handling code ...
    Socket sending_socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    IPAddress send_to_address = IPAddress.Parse("127.0.0.1");
    IPEndPoint sending_end_point = new IPEndPoint(send_to_address, 5550);

    double[] Rots = { 0, 0, 0, frame.Rotation.Y, frame.Rotation.X, frame.Rotation.Z };
    int doubleSize = sizeof(double);
    byte[] send_buffer = new byte[6 * doubleSize + 4]; // 48 bytes of doubles + 4 filler bytes
    for (int i = 0; i < 6; ++i)
    {
        // Note: the original post had BitConverter.GetBytes(Rots), which does
        // not compile; it needs the element, Rots[i].
        byte[] converted = BitConverter.GetBytes(Rots[i]);
        //if (BitConverter.IsLittleEndian)
        //{
        //    Array.Reverse(converted);
        //}
        for (int j = 0; j < doubleSize; ++j)
        {
            send_buffer[i * doubleSize + j] = converted[j];
        }
    }
    // Filler / terminating characters? (bytes 48-51 of the 52-byte packet)
    send_buffer[48] = 0;
    send_buffer[49] = 0;
    send_buffer[50] = 0;
    send_buffer[51] = 0;
    sending_socket.SendTo(send_buffer, sending_end_point);
}


Can anyone explain the ideal dimensions (i.e. the distances between the three LEDs in all planes) for both a clip and a cap model, please? I really can't work out how to calculate them myself.

Looking at the defaults from the FreeTrack manual, it has the side LEDs +/-70mm to either side of the top LED (Z plane), 100mm in front of the top LED (Y plane) and 80mm below the top LED (X plane).

For the 3-point clip model, it has the upper and lower LEDs at +60/+60 and -80/+80 respectively (X plane/Y plane).

I'm not sure if either of these is still ideal for FTNoIR though, and it's confusing as I've seen so many differently dimensioned models.

For myself, I'm thinking of a variation of the cap model, with one LED mounted on top of my headphones and the other two on either side, although I think I might need to raise the top one above the headphones. Then, for my dad, who doesn't use headphones, I'll probably have to do some sort of cap model.

EDIT: I notice the defaults in FTNoIR for the clip are +40/+30 and -70/+80, which is different from the FreeTrack defaults, so I'm wondering if this has been found to be better?

Edited by doveman

@doveman

Maybe these links will help?

Thanks, but that's just a load of pictures of different size and shape designs and some electrical information; nothing that tells me which dimensions I should use for best results.

EDIT: Phew, the FreeTrack site is back up again. I thought we might have lost it for good :eek:

I've printed out a clip template using the dimensions in my last post and cut it out of some stiff cardboard, so it looks vaguely like this: http://www.free-track.net/images/point_model_gallery/tracker_02.jpg. The problem I have now is attaching it to my headphones. I have the Samson SR850 (http://www.samsontech.com/samson/products/headphones/sr-series/sr850/) and I did think of sticking a round velcro pad on the round part where it says Samson and another on the back of the clip, but I don't think that would hold it in position perfectly.

If I moved the clip up I could zip-tie it to the two parallel metal bars in a couple of places (above the top black plastic lump and in between the two), but then the clip won't be straight when I'm wearing the headphones: it will be slanted, so the three LEDs won't lie in a straight vertical plane, and I imagine that won't work.

So if anyone has any ideas, please chip in.


Edited by doveman


You'll need a Kinect, FaceTrackNoIR 1.7 and the Kinect 1.7 drivers.

Please note:

- This uses IR for depth, so it can be used in the dark.

- Direct sun on your face will cause it to lose tracking.

Kinect Drivers and SDK

http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx

Set the Source in FaceTrackNoIR to UDP, port 5550.

Link to EXE

https://docs.google.com/file/d/0B5OZ5Wi1KO5nRnlXRDFmOVBaQ2M/edit?usp=sharing

The default settings are (SingleFace.ini):

[serverSettings]
SERVER=127.0.0.1
PORT=5550

Link for Kinect UDP C++ Source (SingleFace Project)

https://docs.google.com/file/d/0B5OZ5Wi1KO5nUVZqUVczM0VhVk0/edit?usp=sharing

Enjoy :)

Edited by VeryWoolly
Added notes about IR Depth and sunlight


OK, here goes as a guide.

Using an Xbox 360 Kinect (do not connect it to the computer until told). If you want to use a regular webcam, go to step 7.

1. Make sure you have the correct Kinect; you need the power adaptor that looks a bit like this to be able to plug it into a PC.

2. Download the SDK and Toolkit for Kinect from Microsoft and install them.

3. Now plug in the Kinect; Windows should install all the drivers (this may take a few minutes).

4. Test the Kinect: start the "Developer Toolkit Browser", which will be in your Start menu under the Kinect SDK.

5. Go to the Tools section and run one of the Kinect Explorer samples (not the Studio); this will let you see whether it's working and adjust the angle of the camera.

6. Close it all down and install this; it's what allows the Kinect to be used as a webcam. You will probably need to restart the computer first, and it normally starts by default on startup.

7. Now that you have a webcam, go to this nifty wee thing. You also need this; the demo will do, and I just put rubbish in the name, address etc.

8. Now start your webcam and launch the Facetracker demo; you will see how it works there. Then, when ready, start FaceTrackNoIR, where you can adjust all sorts of settings.

9. Testing: click Start in FaceTrackNoIR and you should see a little picture of yourself with the facial tracking. Start ArmA and just create an editor mission with a single man. Once in game, go to Options > Controls > Controllers; you will see two entries for a joystick controller already enabled. Enable the one called TrackIR; the FreeTrack one doesn't seem to work well.

Start a mission and mess around!

I'm still adjusting it, but it seems good. It's a little laggy for me at the moment, but I'm sure I can change that somewhere.


I've built the +60/+60, -80/+80 3-point clip now and it's tracking the LEDs OK, but I'm finding it gives Roll when I Pitch and Z when I Yaw.

If I disable everything except Pitch and Yaw (I have to zero the Roll curve as well, as the tickbox doesn't disable it while Pitch is enabled), it's pretty awesome, particularly after having no luck with face tracking due to inadequate lighting in my room. I would like to be able to use at least some of the other axes if possible though, so if anyone can help me work out what I'm doing wrong I'd be grateful.

It can lose tracking when I turn my head a lot to the right (the clip is on the right side of my head), but I guess that's understandable as the LEDs are going out of view of the camera by then. I've found that with my PS3 Eye, setting it to the zoomed-in mode minimises this problem (with face tracking I found it worked best zoomed out), and I can't imagine I'll be turning my head that far when gaming anyway, so it's not an issue.

EDIT: I seem to have got it working OK in DCS, perhaps by setting the X offset to 50mm to account for the fact that the clip's mounted on the side of my head. I left Roll disabled, but I don't really need that in DCS anyway, I don't think. :cool:

It's sort of working in ArmA as well, but it zooms in when I turn my head left and zooms out when I turn my head right, so if anyone's got some good settings/curves for this, please share :)

Edited by doveman


The first point you have on your model is the pivot in the POSIT algorithm. It's not possible to assign a pivot that isn't a light point. Another option would be to use the P3P algorithm to obtain such a pivot, but that requires a calibrated camera (intrinsic matrix), unlike 3-point POSIT, which works with just a 'focal length' reference value, typically 500-1000.

In other words, the upper/middle point is the axis of rotation, not the centroid of your head.

Edit: It's possible, but messy/impractical (?), to run POSIT, reproject the pivot point in world coordinates back to 2D, then run POSIT again on that instead of the original point, with updated coordinates. Just note that it'll cause errors to accumulate.

As for roll, I've explained to you previously through email how the quaternion zeroing code Patrick wrote works. Note also that when yawing, you still maintain roll authority. So there'll always be a 'way' to yaw around while still causing roll to happen; just how depends on the rotation axis.

Edited by sthalik


Thanks, I'm sure I'll read through your previous e-mails again. It's been a while since I've had a play with it, so I've forgotten a lot ;) As I say, I can live without Roll though.

I'm not sure what you're saying about the upper/middle point being the axis of rotation. Is it the upper LED or the middle one? If it's the middle one, that's pretty much in the center of my head on the Y and Z axes, just not the X, whereas the upper LED is offset on all three axes.


The first point in the settings menu, in the point model, is the rotation axis. The other two points are effectively offsets from it in world coordinates. You can check the .ini file for the first (zeroth) point to find out which one it is. If it's the wrong one, you can use a custom model to reorder them.

Neither of the points is really a centroid, though. To get no (or almost no) translation response, one would have to define a centroid (again, in world coordinates) inside their skull :) That is the true center of movement, whether angular or positional.


@sthalik

Using the Kinect 1.7 SDK I've managed to get Yaw, Pitch, Roll and X, Y, Z, and to send them to FaceTrackNoIR via UDP using IR depth.

The limitation is that your face needs to be more than 800mm from the Xbox 360 Kinect, and it takes 30% CPU on an i7-920.

So I've rolled my own:

- CPU 7% in verbose mode

- CPU 5% minimized

- Only Pitch and Yaw at the moment

- Face can be as close as 500mm to the Kinect

- Uses raw depth data

I've got X, Y, Z data in meters from a central point, e.g. X = -0.0112, Y = 0.00453, Z = 0.8765.

What type of data does FaceTrackNoIR expect for X, Y, Z?

I'm mostly interested in Z, for zooming in and out in ArmA 2.

Any help appreciated.

Cheers.


That makes little sense; there's no way to describe a rigid transform using just two degrees of freedom.

Take a look at Point Cloud Library, it may be what you want.

Edit: also take a look at convolution, cross-correlation and template matching if you want to do anything useful with the depth info.
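
To make that pointer concrete, here's a minimal template-matching sketch (an illustration added here, not code from the thread; the function and variable names are assumptions). It locates a previously captured depth patch of the head in a new depth frame via normalized cross-correlation:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Find where a reference depth patch of the head best matches the new frame.
cv::Point trackPatch(const cv::Mat& depthFrame, const cv::Mat& headPatch)
{
    cv::Mat frame32, patch32, score;
    depthFrame.convertTo(frame32, CV_32F);  // matchTemplate wants 8U or 32F input
    headPatch.convertTo(patch32, CV_32F);
    cv::matchTemplate(frame32, patch32, score, cv::TM_CCOEFF_NORMED);
    cv::Point best;
    cv::minMaxLoc(score, nullptr, nullptr, nullptr, &best);
    return best;  // top-left corner of the best match in the frame
}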

Edited by sthalik


Actually, I don't recommend anyone else get involved with the project.

If you join and want to fix what's broken - there are tons of broken things - prepare to work in an implicit hierarchy, with "because I say so and I'm the project lead" excuses, practically treated as unpaid labor.

I wish I'd never committed any code to the repo. If someone's unable to see code on its own merit rather than by who wrote it, or even acknowledge it's broken - why do it?

I've tried to fix the stability of the software by fixing all the rookie mistakes committed to date, but Wim Vriend doesn't acknowledge the dodgy state of the code's correctness, doesn't allow his poor judgement to be overruled in any case, commits sloppy code and pretends bugs don't exist. On one occasion my post was deleted because I pointed out, in the public forum, a case of broken code being committed.

So don't make the mistake I did: if you think we're equals, we're not. You won't be treated as one.

-sh


@sthalik

Thanks for the pointers to the Point Cloud Library and to convolution, cross-correlation and template matching.

I've taken the Kinect depth measurements in meters and converted them to centimeters. It seems to work.

So X and Y are +/- offsets from the center of the camera's view, and Z is the depth at point (X, Y).

The problem I had was quick movements when zoomed in. I solved this by smoothing out the Pitch and Yaw as you zoom in. Pitch was more of a problem, as I'm calculating everything based on the nose relative to the center of the face.

If TransformZ < -5 Then ' Less than 5 cm
    ' As you zoom in, the sensitivity of Yaw and Pitch should be less:
    ' add more smoothing as TransformZ gets smaller.
    ' (Caveat: as posted, (NoseX + k * NoseX) / (k + 1) reduces to NoseX;
    ' the second term presumably should be the previous frame's value.)
    NoseX = (NoseX + (CInt(-TransformZ * 2) * NoseX)) / (CInt(-TransformZ * 2) + 1)
    NoseY = (NoseY + (CInt(-TransformZ * 5) * NoseY)) / (CInt(-TransformZ * 5) + 1)
End If

The image I'm working on is a 2D representation of a 3D scene: X and Y are like a picture, but with Z = depth at the (X, Y) coordinates.

There is a CPU cost to converting X, Y to meters, so I try to do it infrequently:

DIP.X = X
DIP.Y = Y
DIP.Depth = depth
SP = sensor.CoordinateMapper.MapDepthPointToSkeletonPoint(DepthFormat, DIP) ' coords in meters

I'll post a video when I've sorted it all out.



I still don't know what the ideal dimensions for a clip/cap are. FTNoIR has the cap at 60/100 in the side view and 40/40 in the front view. The 40 seems rather low, as most cap designs I've seen have the LEDs at the corners of the peak, which would be more like 100.

I need to know which dimensions to use for the clip as well.

I'm also a bit confused about the FTNoIR Model Settings tab. Does it use the last model selected (Clip, Cap or Custom) before closing the box with the X or OK? There doesn't seem to be any other way to specify which you're using.


@VeryWoolly

I'm sorry to say you're doing it wrong :(

Correlating image and world coordinates isn't an easy thing to do, and certainly not with a linear mapping like the one in your code.

Take a look at the OpenCV function solvePnP. It requires a calibration matrix, like most functions in that module.

Don't approach the subject without a solid understanding of linear algebra and trigonometry; you just won't achieve any results.

To calculate a 6DOF pose you need a minimum of 3 points, and even then there are 2 solutions.

If you could find the precise positions of some face landmarks, like both eyes and the nose, it would become trivial to use P3P (or POSIT) to estimate a rigid transform...

Here's one implementation of P3P:

http://robotics.ethz.ch/~scaramuzza/Davide_Scaramuzza_files/publications/pdf/CVPR11_kneip.pdf
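
To make this concrete, here's a minimal solvePnP sketch (an illustration added here, not code from the thread; the landmark coordinates and calibration matrix are made-up placeholders, and since OpenCV's default iterative solver wants at least four points, a chin landmark is added):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main()
{
    // 3D landmark positions in a head-fixed frame, in millimeters (placeholders).
    std::vector<cv::Point3f> model = {
        { -32.f,   0.f,  0.f },   // left eye
        {  32.f,   0.f,  0.f },   // right eye
        {   0.f, -40.f, 25.f },   // nose tip
        {   0.f, -95.f,  5.f }    // chin
    };
    // Matching 2D detections in the camera image, in pixels (placeholders).
    std::vector<cv::Point2f> image = {
        { 290.f, 210.f }, { 352.f, 212.f }, { 321.f, 255.f }, { 323.f, 300.f }
    };
    // Calibration matrix: the focal length and principal point should come
    // from a real camera calibration.
    cv::Matx33d K(700, 0, 320,
                  0, 700, 240,
                  0,   0,   1);
    cv::Vec3d rvec, tvec;  // axis-angle rotation; translation in model units
    cv::solvePnP(model, image, K, cv::noArray(), rvec, tvec);
    return 0;
}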

@doveman

The unit used doesn't matter, but it will affect translation proportionally. Note that "2.42 centimeters" is a perfectly fine unit!

I'd be most concerned about the points lying in a single line (infinite solutions for one axis), going out of frame, or blending together. Other than that, Patrick's code is numerically robust with regard to coordinates.

Edited by sthalik

@doveman

The unit used doesn't matter, but it will affect translation proportionally. Note that "2.42 centimeters" is a perfectly fine unit!

I'd be most concerned about the points lying in a single line (infinite solutions for one axis), going out of frame, or blending together. Other than that, Patrick's code is numerically robust with regard to coordinates.

Hmm, what does "affect translation proportionally" mean? If it's going to affect something, then I'd say it does matter! Anyway, I wasn't asking about which units to use but about which distances between the points are ideal. Obviously I want to make the clip as compact as possible for aesthetic reasons, and for the cap it's preferable to have the lower LEDs at the corners of the peak so that they're not in the user's field of view. It also seems better to have them on the actual outer edge of the peak: if they're on top of, or below, the peak, they could be hidden from the camera's view by the peak when pitching the head.


@sthalik

The image links below may give a better understanding of what the depth camera sees.

The Kinect is sitting just below the screen, pointing up at an angle of 8 degrees. I'm sitting facing the screen, with the top of my head about ~900mm from the Kinect.

The depth data streams in at about 30fps as 16 bits per pixel, instead of, say, 32 bits for colour. I convert the stream into X and Y pixel coords, 320 by 240.

I loop from left to right and top to bottom, excluding all depths above 1200mm. The first depth I find under 1200mm will be the top of my head. From there I can work out the left and right edges of my face and the approximate center of my head.

Then I can work out where the nose is: is it closer to the camera? Is the depth further away to the left and right of the estimated nose point (e.g. the cheeks are further away than the tip of the nose)?

Yaw and Pitch are worked out from the relation of the nose to FaceCenter X and FaceCenter Y.

Transform X and Y are based on my face relative to the center of the Kinect's view, in cm. That means I have to get my head in the center for this to work properly; early days though.

For Transform Z I take a depth reading and then Z is greater or less than that. As I move closer it shows a negative value in cm.

Roll I haven't worked on.

After that I send Yaw, Pitch etc. via UDP to FaceTrackNoIR. (A sketch of the head-finding scan follows below.)
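
Here's a rough sketch of that top-down scan (an illustration of the description above, not VeryWoolly's actual source; the names and the 40-pixel row offset are assumptions):

#include <cstdint>
#include <cstdio>

const int W = 320, H = 240;
const uint16_t MAX_MM = 1200;  // ignore depths beyond 1200 mm

void findHead(const uint16_t depth[H][W])
{
    // Scan top to bottom, left to right; the first in-range pixel is the
    // top of the head.
    int topX = -1, topY = -1;
    for (int y = 0; y < H && topY < 0; ++y)
        for (int x = 0; x < W; ++x)
            if (depth[y][x] > 0 && depth[y][x] < MAX_MM) {
                topX = x; topY = y;
                break;
            }
    if (topY < 0) return;  // nothing within range

    // Scan a row further down to estimate the left/right edges of the face
    // and its approximate center.
    int row = topY + 40;   // assumed offset below the crown
    if (row >= H) return;
    int left = -1, right = -1;
    for (int x = 0; x < W; ++x)
        if (depth[row][x] > 0 && depth[row][x] < MAX_MM) {
            if (left < 0) left = x;
            right = x;
        }
    if (left < 0) return;
    printf("head top=(%d,%d), face edges=[%d,%d], centerX=%d\n",
           topX, topY, left, right, (left + right) / 2);
}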

It's really brought games like ArmA 2 and 3 to life, and in racing games like Dirt 3 I could never go back to a fixed view again.

Hopefully you can access the images below:

Cheers.

https://docs.google.com/file/d/0B5OZ5Wi1KO5nTFB3TnFxaUk2Ync/edit?usp=sharing

https://docs.google.com/file/d/0B5OZ5Wi1KO5nX2l1SlNwcUdDdGs/edit?usp=sharing

https://docs.google.com/file/d/0B5OZ5Wi1KO5nRERmMXc1eWJSMG8/edit?usp=sharing

https://docs.google.com/file/d/0B5OZ5Wi1KO5nQ042LXVBVTV2SGs/edit?usp=sharing

https://docs.google.com/file/d/0B5OZ5Wi1KO5nSGNONllmRVBrbzQ/edit?usp=sharing


@doveman,

If the unit is 2 times bigger, the translation result will be 2 times bigger; it scales linearly no matter the position.

@VeryWoolly,

1) Rotation follows trigonometric relationships, not linear ones like your (Visual Basic?) code uses.

2) Roll won't work unless you get a third point.

3) Do you account for perspective in the angles? The whole 'math' of it is rather shoddy...
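
On point 1, a hedged sketch of what the trigonometric relationship looks like (names and numbers are hypothetical): a lateral nose offset only converts to a yaw angle through the depth, via atan2, so a fixed linear factor drifts as the face moves closer to or further from the camera:

#include <cmath>
#include <cstdio>

int main()
{
    double noseOffsetMm = 35.0;  // lateral offset of the nose from the face center
    double noseDepthMm  = 80.0;  // how far the nose protrudes toward the camera
    double yawRad = std::atan2(noseOffsetMm, noseDepthMm);
    printf("yaw = %.1f degrees\n", yawRad * 180.0 / 3.14159265358979);
    return 0;
}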


@sthalik

Yes, probably shoddy math and shoddy coding.

[3 min video]

Shoddy flying and shoddy video making. LOL, but it works. Head tracking in all its forms definitely brings games alive.

And VB.net takes 8% CPU in verbose mode and 5% minimized, versus the C++ example's 30% CPU in verbose mode using skeleton data. Nothing against C++ or VB.net; I'm just more used to VB.

It's a proof of concept, a work in progress. It also works in the dark: one more small step towards the Holodeck.

It'll be interesting to see how the Oculus VR works out.

http://www.oculusvr.com/

I have limited roll movement because of a bike accident, but it's something that would probably help. When rolling the head you're more likely to be using it to look around a corner in an FPS like ArmA. There is a definite angle between the left and right temples and a coordinate at chin level; I'll think about it.

Anyway, all the best.


Rift support already works in the opentrack fork, but there are some problems with yaw drift, which mm0zct hasn't yet fixed due to lack of time :(

If you fix your math to the level that it passes review, I'd be glad to get it into opentrack :)

