VeryWoolly
-
I just did a test of the funnel maneuver in OA and it was more (how do I say this?) fluid. I found it easier to stay on target.
-
I did learn from your funnel maneuver tutorial. The main part is stopping to a hover before commencing sideways and then pointing down, e.g. when you're trying to move forward but always slipping to the side. I use your mission for testing Kinect head tracking using the depth sensor. As for the funnel maneuver tutorial, it's all good, but it's one of those things you have to see to believe. Once you've done it you can't believe other people can't do it, a bit like doing tricks on a skateboard. A video with narration would be the ticket. I got the Helo Maint sign worked out; that's how I saw the guy kick the skid, which makes me laugh. This mission is the mutt's nuts.
-
Brilliant mission. Getting a rearm and the guy on the right gives the skid a kick, awesome. Funnelling makes a huge difference. Many thanks.
-
@sthalik Yes, probably shoddy maths and shoddy coding, plus 3 minutes of shoddy flying and shoddy video making. LOL, but it works. Head tracking in all its forms definitely brings games alive. The VB.net version uses 8% CPU in verbose mode and 5% minimized, versus the C++ example at 30% CPU in verbose mode using skeleton data. Nothing against C++ or VB.net, I'm just more used to VB. It's a proof of concept and a work in progress. It also works in the dark, one more small step towards the Holodeck. It'll be interesting to see how the Oculus VR works out: http://www.oculusvr.com/ I have limited roll movement because of a bike accident, but roll is something that would probably help; when rolling the head you're more likely to be using it to look around a corner in an FPS like Arma. There is a definite angle between the left and right temples and a coordinate at chin level, so I'll think about it. Anyway, all the best.
-
@sthalik The image links below may give a better understanding of what the depth camera sees. The Kinect is sitting just below the screen, pointing up at an angle of 8 degrees. I'm sitting facing the screen, with the top of my head about ~900mm away from the Kinect. The depth data streams in at about 30fps as 16 bits per pixel, instead of say 32 bits for colour.
I convert the stream into X and Y pixel coords (320 by 240), then loop from left to right and top to bottom, excluding all depths above 1200mm. The first depth I find under 1200mm will be the top of my head. From there I can work out the left and right edges of my face and the approximate center of my head. Then I can work out where the nose is: is it closer to the camera? Is the depth further away to the left and right of the estimated nose point, e.g. the cheeks are further away than the tip of the nose?
Yaw and pitch are worked out from the relation of the nose to FaceCenter X and FaceCenter Y. Transform X and Y are based on my face relative to the center of the Kinect's view, in cm, which means I have to get my head in the center for this to work properly. Early days though. For Transform Z I take a reference depth reading and then Z is greater or less than that; as I move closer it shows a negative value in cm. Roll I haven't worked on.
After that I send yaw, pitch etc. via UDP to FaceTrackNoIR. It's really brought games like Arma 2 and 3 to life, and in racing games like Dirt 3 I could never go back to a fixed view again. Hopefully you can access the images below. Cheers.
https://docs.google.com/file/d/0B5OZ5Wi1KO5nTFB3TnFxaUk2Ync/edit?usp=sharing
https://docs.google.com/file/d/0B5OZ5Wi1KO5nX2l1SlNwcUdDdGs/edit?usp=sharing
https://docs.google.com/file/d/0B5OZ5Wi1KO5nRERmMXc1eWJSMG8/edit?usp=sharing
https://docs.google.com/file/d/0B5OZ5Wi1KO5nQ042LXVBVTV2SGs/edit?usp=sharing
https://docs.google.com/file/d/0B5OZ5Wi1KO5nSGNONllmRVBrbzQ/edit?usp=sharing
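To make that a bit more concrete, here is a rough C# sketch of the scan. My real code is VB.net; only the 1200mm cut-off is the real value, and the 40-row offset, 20-pixel window and scale factors are placeholders for illustration, not my tuned numbers.

using System;

static class HeadScanSketch
{
    // depthMm: 320x240 depth values in millimetres, already unpacked from the
    // Kinect depth stream (player-index bits removed).
    public static void FindHead(ushort[] depthMm, int width, int height)
    {
        const ushort MaxDepth = 1200; // ignore anything further than 1.2 m away

        // 1. Scan top to bottom, left to right: the first pixel closer than the
        //    cut-off is taken as the top of the head.
        int headTopY = -1;
        for (int y = 0; y < height && headTopY < 0; y++)
            for (int x = 0; x < width; x++)
            {
                ushort d = depthMm[y * width + x];
                if (d > 0 && d < MaxDepth) { headTopY = y; break; }
            }
        if (headTopY < 0) return; // nothing in range

        // 2. A little below the top of the head, find the left and right edges of
        //    the face and take their midpoint as the face centre.
        int faceRowY = Math.Min(headTopY + 40, height - 1); // ~40 rows down, illustrative
        int left = -1, right = -1;
        for (int x = 0; x < width; x++)
        {
            ushort d = depthMm[faceRowY * width + x];
            if (d > 0 && d < MaxDepth) { if (left < 0) left = x; right = x; }
        }
        if (left < 0) return;
        int faceCenterX = (left + right) / 2;
        int faceCenterY = faceRowY;

        // 3. Nose estimate: the closest pixel in a window around the face centre
        //    (the cheeks either side should read further away than the tip of the nose).
        int noseX = faceCenterX, noseY = faceCenterY;
        ushort noseDepth = MaxDepth;
        for (int y = Math.Max(0, faceCenterY - 20); y <= Math.Min(height - 1, faceCenterY + 20); y++)
            for (int x = Math.Max(0, faceCenterX - 20); x <= Math.Min(width - 1, faceCenterX + 20); x++)
            {
                ushort d = depthMm[y * width + x];
                if (d > 0 && d < noseDepth) { noseDepth = d; noseX = x; noseY = y; }
            }

        // 4. Yaw and pitch come from where the nose sits relative to the face centre;
        //    the scale factors would be tuned by hand.
        double yaw = (noseX - faceCenterX) * 1.0;
        double pitch = (noseY - faceCenterY) * 1.0;
        Console.WriteLine($"yaw ~ {yaw}, pitch ~ {pitch}, nose depth = {noseDepth} mm");
    }
}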
-
@sthalik Thanks for the pointers to Point Cloud Library and convolution, cross-correlation and template matching. I've taken the Kinect depth measurements in meters and converted them to centimeters, and it seems to work. X and Y are +/- from the center view of the camera; Z is the depth at point X,Y.
The problem I had was quick movements when zoomed in. I solved this by smoothing out the pitch and yaw as you zoom in. Pitch was more of a problem, as I'm calculating everything based on the nose relative to the center of the face.

If TransformZ < -5 Then ' Less than 5cm
    ' As you zoom in, sensitivity of yaw and pitch should be less.
    ' Add more smoothing as Transform Z gets less.
    NoseX = (NoseX + (CInt(-TransformZ * 2) * NoseX)) / (CInt(-TransformZ * 2) + 1)
    NoseY = (NoseY + (CInt(-TransformZ * 5) * NoseY)) / (CInt(-TransformZ * 5) + 1)
End If

The image I'm working on is a 2D representation of a 3D image: X and Y are like a picture, but with Z = depth at the X,Y coordinates. There is a CPU cost to converting X,Y to meters, so I try to use it infrequently.

DIP.X = X
DIP.Y = Y
DIP.Depth = depth
SP = sensor.CoordinateMapper.MapDepthPointToSkeletonPoint(DepthFormat, DIP) ' coords in meters

I'll post a video when I've sorted it all out.
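Reading that VB as a weighted average against the previous frame's value (that's my interpretation of what the running NoseX/NoseY hold), the same idea in C# would be roughly the sketch below; the 5cm threshold and the factors 2 and 5 come from the code above, the method name is just for illustration.

static class Smoothing
{
    // The closer the head is (more negative transformZcm, in cm), the heavier the
    // weight on the previous value, which damps yaw/pitch while zoomed in.
    public static double Smooth(double previous, double current, double transformZcm, int factor)
    {
        if (transformZcm >= -5) return current;        // only smooth once closer than 5 cm
        int weight = (int)(-transformZcm * factor);    // factor 2 for NoseX, 5 for NoseY above
        return (current + weight * previous) / (weight + 1);
    }
    // e.g. noseX = Smooth(previousNoseX, rawNoseX, transformZ, 2);
}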
-
@sthalik Using the Kinect 1.7 SDK I've managed to get yaw, pitch and roll plus X, Y, Z and send it to FaceTrackNoIR via UDP using IR depth. The limitation is that your face needs to be more than 800mm from the Xbox 360 Kinect, and it takes 30% CPU on an i7-920. So I've rolled my own:
- CPU 7% verbose mode
- CPU 5% minimized
- Only pitch and yaw at the moment
- Face can be as close as 500mm from the Kinect
- Uses raw depth data
I've got X, Y, Z data in meters off a central point, e.g. X = -0.0112, Y = 0.00453, Z = 0.8765. What type of data does FaceTrackNoIR expect for X, Y, Z? I'm mostly interested in Z for zooming in and out in Arma 2. Any help appreciated. Cheers.
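In the meantime I'm packing it the same way as the C++/C# snippets further down the thread: six doubles (X, Y, Z, yaw, pitch, roll) in one UDP datagram to port 5550, converting my meter values to centimeters first. Whether cm is actually what FaceTrackNoIR wants for X/Y/Z is exactly what I'm asking, so treat the * 100 in this sketch as a guess, and the class/method names as illustration only.

using System;
using System.Net;
using System.Net.Sockets;

static class FtnoirUdpSketch
{
    static readonly UdpClient Client = new UdpClient();

    // Packs X, Y, Z plus yaw, pitch, roll into the six-double datagram used by the
    // snippets further down and sends it to FaceTrackNoIR on UDP port 5550.
    // The metre-to-centimetre conversion (* 100) is an assumption, not a confirmed unit.
    public static void Send(double xM, double yM, double zM, double yaw, double pitch, double roll)
    {
        double[] data = { xM * 100, yM * 100, zM * 100, yaw, pitch, roll };
        byte[] packet = new byte[data.Length * sizeof(double)];
        Buffer.BlockCopy(data, 0, packet, 0, packet.Length);
        Client.Send(packet, packet.Length, new IPEndPoint(IPAddress.Loopback, 5550));
    }
    // e.g. FtnoirUdpSketch.Send(-0.0112, 0.00453, 0.8765, yaw, pitch, 0);
}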
-
You'll need a Kinect, FaceTrackNoIR 1.7 and Kinect Drivers 1.7.
Please note:
- This uses IR for depth, so it can be used in the dark.
- Direct sun on your face will cause it to lose tracking.
Kinect Drivers and SDK: http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx
Set the Source in FaceTrackNoIR to UDP, port 5550.
Link to EXE: https://docs.google.com/file/d/0B5OZ5Wi1KO5nRnlXRDFmOVBaQ2M/edit?usp=sharing
The default settings are (SingleFace.ini):
[serverSettings]
SERVER=127.0.0.1
PORT=5550
Link for the Kinect UDP C++ source (SingleFace project): https://docs.google.com/file/d/0B5OZ5Wi1KO5nUVZqUVczM0VhVk0/edit?usp=sharing
Enjoy :)
-
@doveman Maybe these links will help? http://www.free-track.net/english/hardware/point_model.php http://www.free-track.net/english/hardware/point_model_gallery.php http://www.free-track.net/english/hardware/calcled/
-
Hi all. Big thanks to FaceTrackNoIR, FreeTrack and Carl Kenner. Below is the C++ code for a Kinect (Xbox 360) sensor to communicate with FaceTrackNoIR via UDP. At the bottom is C# code using the FaceTrackingBasics-WPF example - it's slower. Enjoy :)

Please note:
- I'll post a link to the source code and the exe soon.
- On an i7-920 this code takes 30% CPU.
- It will not work well if you're in direct sunlight.
- Tested on Windows 7 64-bit only.
- I'm not a C++ coder; this was the most efficient example I found in the SDK for face tracking.
- I'm working on a one-pass-per-frame HeadPose, currently 6% CPU; I'll post it when done.

You'll need:
- FaceTrackNoIR
- Xbox Kinect (needs a special power adapter) or Windows Kinect
- Visual Studio Express
- Kinect Drivers
- Kinect SDK 1.7

Once the drivers are installed, use the SDK to play with and test the setup. Install the C++ Face Tracking Visualization from the Kinect Toolkit and open it up in Visual Studio.

Add the lib for sockets:
Right-click SingleFace, then Properties (this is the project to work on).
Select Linker/Input, click Additional Dependencies, then click Edit.
Add Ws2_32.lib under Kinect10.lib.

Set up the headers for sockets:
Find stdafx.h under Headers and double-click to open it.
// Windows Header Files:
#include <winsock2.h> // <=== Add this line
#include <windows.h>
#include <Shellapi.h>

Open SingleFace.cpp and find:
void SingleFace::FTHelperCallingBack(PVOID pVoid)
Go to the end of the sub, e.g.
// <<<== Insert code here
            }
        }
    }
and insert the code below above the 3 closing curly brackets.

Please note these two lines:
si_other.sin_port = htons(5550); // This is the port FaceTrackNoIR is expecting data on
si_other.sin_addr.S_un.S_addr = inet_addr("127.0.0.1"); // This is localhost, e.g. your PC

// SEND UDP data through a socket to FaceTrackNoIR
struct sockaddr_in si_other;
int s, slen = sizeof(si_other);
WSADATA wsa;

// Initialise Winsock
printf("\nInitialising Winsock...");
if (WSAStartup(MAKEWORD(2,2), &wsa) != 0)
{
    printf("Failed. Error Code : %d", WSAGetLastError());
    exit(EXIT_FAILURE);
}
printf("Initialised.\n");

// Create the socket
if ((s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) == SOCKET_ERROR)
{
    printf("socket() failed with error code : %d", WSAGetLastError());
    exit(EXIT_FAILURE);
}

// Set up the address structure
memset((char *) &si_other, 0, sizeof(si_other));
si_other.sin_family = AF_INET;
si_other.sin_port = htons(5550);
si_other.sin_addr.S_un.S_addr = inet_addr("127.0.0.1");

double test_data[6];
// Translation
test_data[0] = (double) translationXYZ[0]; // X
test_data[1] = (double) translationXYZ[1]; // Y
test_data[2] = (double) translationXYZ[2]; // Z
// Rotation
test_data[3] = (double) rotationXYZ[1]; // Yaw
test_data[4] = (double) rotationXYZ[0]; // Pitch
test_data[5] = (double) rotationXYZ[2]; // Roll

// Send the message
int err_send;
err_send = sendto(s, (const char *) test_data, sizeof(test_data), 0, (struct sockaddr *) &si_other, slen);
if (err_send == SOCKET_ERROR)
{
    printf("sendto() failed with error code : %d", WSAGetLastError());
    exit(EXIT_FAILURE);
}
closesocket(s);
WSACleanup();

Press F5 to run the application, then start FaceTrackNoIR and change the Source to FaceTrackNoIR UDP, port 5550, under Settings. If any Windows network pop-ups appear, click Allow for the private network.

Problems?
- Test that the Kinect samples work OK.
- Test that FaceTrackNoIR UDP works OK with a laptop camera.
- Sit down in the sun for a while and soak up the rays.
C# code using the FaceTrackingBasics-WPF example:

using System.Net;
using System.Net.Sockets;
using System.Text;

// Add the following inside:
// internal void OnFrameReady(KinectSensor kinectSensor, ColorImageFormat colorImageFormat, byte[] colorImage, DepthImageFormat depthImageFormat, short[] depthImage, Skeleton skeletonOfInterest)
// after the face-tracking frame has been obtained:

Socket sending_socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
IPAddress send_to_address = IPAddress.Parse("127.0.0.1");
IPEndPoint sending_end_point = new IPEndPoint(send_to_address, 5550);

double[] Rots = { 0, 0, 0, frame.Rotation.Y, frame.Rotation.X, frame.Rotation.Z };

int doubleSize = sizeof(double);
byte[] send_buffer = new byte[6 * doubleSize + 4];
for (int i = 0; i < 6; ++i)
{
    byte[] converted = BitConverter.GetBytes(Rots[i]);
    //if (BitConverter.IsLittleEndian)
    //{
    //    Array.Reverse(converted);
    //}
    for (int j = 0; j < doubleSize; ++j)
    {
        send_buffer[i * doubleSize + j] = converted[j];
    }
}

// Filler / terminating characters?
send_buffer[48] = 0;
send_buffer[49] = 0;
send_buffer[50] = 0;
send_buffer[51] = 0;

sending_socket.SendTo(send_buffer, sending_end_point);