Learning Through Tinkering


As presented @ AP Hogeschool IT at Work

All the references I made in the presentation are in this post, as well as links to some code repositories to get you started! If you have any questions, contact me on Twitter, LinkedIn or anywhere else you might find me!

Slides

Slides available as PDF

Content

Game Engine (update/draw-loop)

For the game engine I followed a blog by James Cho. It is an amazing step-by-step guide to building a game engine on Android, and it's quite easy to adapt that Android implementation to JavaFX! The tutorial is getting quite old, but I had no problem following it and reaching my goal.
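To give an idea of the update/draw loop itself, here is a minimal sketch in JavaFX. It assumes an AnimationTimer driving a canvas and a list of Drawable objects like the one shown under Game Physics below, plus a draw(GraphicsContext) method that isn't part of that snippet; the class and method names here are mine, not the tutorial's.

import java.util.List;

import javafx.animation.AnimationTimer;
import javafx.scene.canvas.GraphicsContext;

public class GameLoop extends AnimationTimer {
    private final GraphicsContext gc;
    private final List<Drawable> drawables;

    public GameLoop(GraphicsContext gc, List<Drawable> drawables) {
        this.gc = gc;
        this.drawables = drawables;
    }

    @Override
    public void handle(long now) {
        // Update phase: advance the state of every object
        for (Drawable drawable : drawables) {
            drawable.update();
        }

        // Draw phase: clear the canvas and repaint everything
        gc.clearRect(0, 0, gc.getCanvas().getWidth(), gc.getCanvas().getHeight());
        for (Drawable drawable : drawables) {
            drawable.draw(gc);
        }
    }
}

Calling start() on an instance of this class makes JavaFX invoke handle() once per frame.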

While his entire website is full of info, I specifically used these pages:

Game Physics

You can only model something correctly if you fully understand it. While I had little understanding of physics, there is an amazing resource available for free.

The Nature of Code by Daniel Shiffman is an amazing book that explains both the physics and how to program it in a simple way. I am still baffled by how simple he made it look.

/*
    Abstract class used for most of the items on screen.
    This is what it looks like after applying only chapter 1 and the beginning of chapter 2.
 */
public abstract class Drawable {
    protected Vector2 location;
    protected Vector2 velocity;
    protected Vector2 acceleration;

    public Drawable(Vector2 location) {
        this.location = location;
        this.velocity = new Vector2(0, 0);
        this.acceleration = new Vector2(0, 0);
    }

    public void applyForce(Vector2 force) {
        acceleration.add(force);
    }

    public void update() {
        velocity.add(acceleration);
        location.add(velocity);

        // Reset acceleration to zero so forces do not accumulate across frames
        acceleration.mult(0);
    }
}
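As a tiny usage sketch of how such an object moves: Ball is a made-up subclass purely for illustration, and I'm assuming Vector2 accepts double components; the classes in the actual repo may look different.

// Hypothetical concrete subclass, purely for illustration
public class Ball extends Drawable {
    public Ball(Vector2 location) {
        super(location);
    }
}

// Created once:
Ball ball = new Ball(new Vector2(100, 0));
Vector2 gravity = new Vector2(0, 0.1);

// Then, every frame in the update step:
ball.applyForce(gravity); // forces accumulate into acceleration
ball.update();            // acceleration -> velocity -> location, then acceleration resets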

The best part is that he published his book online in HTML format with JavaScript examples. You can find it here: https://natureofcode.com/book/

The little Portal-based game can be found here.

Attendance List

For the actual recognition, I just use the Azure Cognitive Services Face API. It is extremely easy to use. Request a free Azure trial and get going! :-)

As a buffer between the camera (a stream of images) and the Face API (a single request/response per image), I used OpenCV. The most important part of the code is the following:

public void run() {
    // Select a Camera
    VideoCapture camera = new VideoCapture(0);

    // Load a Classifier (pre-trained model packaged with OpenCV)
    CascadeClassifier faceDetector =
            ClassifierLoader.load("/lbpcascade_frontalface.xml");

    Mat frame = new Mat();
    // LOOP FOREVER!!!
    while (true) {
        if (camera.read(frame)) {

            // Run a detection on the frame
            MatOfRect faceDetections = new MatOfRect();
            faceDetector.detectMultiScale(frame, faceDetections);
            Rect[] detectedFaces = faceDetections.toArray();

            // Draw rectangles on the original frame
            drawRectangles(detectedFaces, frame);

            // Only send the frame to Cognitive Services if you actually detected a face
            if (detectedFaces.length > 0) {
                cognitiveServices.detectFaces(frame);
            }

            // Render the edited frame in a simple application
            render(frame);
        }
    }
}
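The drawRectangles helper is not shown above. A possible implementation (a sketch that sits in the same class as run(), assuming OpenCV 3+ where rectangle drawing lives in org.opencv.imgproc.Imgproc) just draws a green box around every detection:

// Draw a green box around every detected face, directly on the frame
private void drawRectangles(Rect[] detectedFaces, Mat frame) {
    for (Rect face : detectedFaces) {
        // tl() and br() are the top-left and bottom-right corners of the detection
        Imgproc.rectangle(frame, face.tl(), face.br(), new Scalar(0, 255, 0), 2);
    }
}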

The GitHub code example is here. I still need to clean up the repo a bit.
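For completeness, here is a rough sketch of what cognitiveServices.detectFaces(frame) could do: encode the OpenCV Mat as a JPEG and POST it to the Face API detect endpoint over plain REST. The endpoint, key and class name are placeholders, and the actual repo may well use the Azure SDK instead.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.imgcodecs.Imgcodecs;

public class CognitiveServices {
    // Placeholders: fill in your own region and subscription key
    private static final String ENDPOINT =
            "https://<your-region>.api.cognitive.microsoft.com/face/v1.0/detect";
    private static final String KEY = "<your-subscription-key>";

    private final HttpClient client = HttpClient.newHttpClient();

    public void detectFaces(Mat frame) throws Exception {
        // Encode the frame as a JPEG byte array
        MatOfByte buffer = new MatOfByte();
        Imgcodecs.imencode(".jpg", frame, buffer);

        // Send the raw bytes to the Face API detect endpoint
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(ENDPOINT))
                .header("Ocp-Apim-Subscription-Key", KEY)
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofByteArray(buffer.toArray()))
                .build();

        // The response body is a JSON array describing the detected faces
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}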

Lessons Learned from Pokémon Go

  • They do not use JSON to communicate; they use Google Protocol Buffers. An example project with the Protocol Buffers Maven Plugin can be found here (see the short sketch after this list).

  • They use S2 Geometry, which is a really cool way to index the earth!
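To get a feel for what Protocol Buffers look like from the Java side without generating any classes, the well-known Timestamp type that ships with protobuf-java already demonstrates the builder-and-bytes workflow. In a real project the Maven plugin would compile your own .proto files into classes with the same shape; this is just a minimal sketch.

import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.Timestamp;

public class ProtobufDemo {
    public static void main(String[] args) throws InvalidProtocolBufferException {
        // Build a message through the generated builder API
        Timestamp original = Timestamp.newBuilder()
                .setSeconds(System.currentTimeMillis() / 1000)
                .build();

        // Serialize to the compact binary wire format (instead of JSON text)
        byte[] wire = original.toByteArray();

        // Parse it back on the receiving side
        Timestamp parsed = Timestamp.parseFrom(wire);
        System.out.println(parsed.getSeconds() + " seconds, "
                + wire.length + " bytes on the wire");
    }
}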

Interacting with the world

I wrote a blog post about creating the “plushy-controlled game”, going into the details of the solution: