Posted by & filed under article.

Every morning I open the Buypass desktop app and enter my personal pin. In return I get a passcode (that I can't copy), which I then have to type into Check Point Endpoint Security in order to log on to the network. Security-wise this is great, but it is tiresome. Hence the idea of automating the process was born! A quick view of the results in action can be seen in the video below.

Technical approach

By starting the Buypass process from the C# application, the process handle for Buypass is returned to the application.

process = Process.Start(processPath);

This starts the process as defined by processPath, which, when properly configured, points to the Buypass executable. We have no way of knowing precisely when the Buypass application is ready, but a clever trick from Stack Overflow is to check for a non-zero window handle to confirm that the application is fully loaded.

isFormRdy = (process.MainWindowHandle != IntPtr.Zero);

Since we can't know exactly how long the application needs to load, a timer polls the process at an interval of 500 ms to check whether the window handle is non-zero. Once the Buypass application is loaded and ready to receive the pin as input, I use the System.Windows.Forms.SendKeys class to type the personal pin. It is therefore critical that no other application steals the input focus at this point; otherwise the personal pin would be typed into, for instance, Word, if Word were focused right after Buypass finished loading. After the pin, SendKeys is used to send the Enter keypress.
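The polling-then-typing flow described above can be sketched roughly as follows. This is an illustrative sketch, not the project's actual code; the class and field names are my own invention.

```csharp
using System;
using System.Diagnostics;
using System.Windows.Forms;

class PinTyper
{
    private readonly Timer timer = new Timer { Interval = 500 }; // poll every 500 ms
    private readonly Process process;
    private readonly string pin;

    public PinTyper(Process process, string pin)
    {
        this.process = process;
        this.pin = pin;
        timer.Tick += OnTick;
        timer.Start();
    }

    private void OnTick(object sender, EventArgs e)
    {
        process.Refresh(); // re-read cached process info so MainWindowHandle updates
        if (process.MainWindowHandle == IntPtr.Zero)
            return; // window not created yet; poll again on the next tick

        timer.Stop();
        // SendKeys types into whichever window has focus, so Buypass must
        // still be the foreground window at this point.
        SendKeys.SendWait(pin + "{ENTER}");
    }
}
```

Note the call to Process.Refresh: the Process object caches its state, so without it MainWindowHandle can stay zero even after the window appears.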


At this point Buypass generates what I assume to be an image of the passcode. Before grabbing a screenshot of the passcode, I pass the window handle from the process and a ref variable to the User32 API GetWindowRect.

[DllImport("user32.dll")]
public static extern bool GetWindowRect(IntPtr hwnd, ref Rect rectangle);

// Get the position of the form.
IntPtr ptr = process.MainWindowHandle;
Rect pRect = new Rect();
GetWindowRect(ptr, ref pRect);

Now that the position of the Buypass window is known, it's possible to target the passcode area with some offsets, so that a screenshot is grabbed of the passcode and only the passcode. This drastically reduces the noise compared to running OCR on the whole screen.
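A targeted capture could look something like the sketch below. The offsets and dimensions are placeholders, not the real values, which depend on the Buypass window layout; I also assume the Rect struct used with GetWindowRect above exposes Left and Top fields.

```csharp
using System.Drawing;

static Bitmap CapturePasscodeArea(Rect window)
{
    const int offsetX = 50, offsetY = 120;  // assumed offsets into the window
    const int width = 200, height = 40;     // assumed size of the passcode area

    var bitmap = new Bitmap(width, height);
    using (var graphics = Graphics.FromImage(bitmap))
    {
        // Copy only the passcode region of the screen into the bitmap.
        graphics.CopyFromScreen(window.Left + offsetX, window.Top + offsetY,
                                0, 0, new Size(width, height));
    }
    return bitmap; // ready to hand over to the OCR step
}
```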

Further, judging from the network traffic Buypass produces, I assume a call is issued to a server to verify the client with the provided pin. When dealing with the network we can't be sure exactly when the passcode is available; if there is network lag, for instance, a screen capture might occur before the passcode is shown. Since we can't be certain, a recursive retry function re-runs the screen capture and OCR up to five times, with a delay of one second between attempts.

Log("OCR returned: " + word.Text);
if (word.Text.Length >= 5) {
  Clipboard.SetText(word.Text);
  Log("Password copied to clipboard.");
} else {
  Log("OCR not accepted. Retrying... " + retry);
  CaptureScreen(rect, retry - 1);
}
For the OCR result to be accepted, it must be a string of at least 5 characters. Finally, the password is copied to the clipboard.

Also, if the application is started with the argument -run, it will open Buypass, grab the passcode, and then exit Buypass (Byepass!), leaving a clean exit.


The project is available on GitHub:

Please leave a comment if you found this interesting or if you would like to suggest improvements.



JavaZone 2015 is just around the corner, and here are my picks so far. Some slots are still empty and TBD, while other slots are double-booked. I might just have to decide a couple of minutes before the talks start and let my gut decide.


Wednesday 9/9

Thursday 10/9



The other day I tried Chrome without any extensions: a clean Chrome, without any customization. It made me realize how important the extensions have become, and that the extensions make the browser. I have listed my installed Chrome extensions below, hoping other developers will be inspired to get them as well, if they don't have them already.

Leisure Extensions

Development Extensions

Misc Extensions

These extensions complete my browser experience. Other suggestions for extensions I should be using are greatly appreciated.



Woah!!! I created an inefficient but working 3D printer in Minecraft. The 3D printer is built with the ComputerCraft mod. The turtles within the mod are blocks that look like small or big computers, depending on your Minecraft perspective. The turtles are controlled by Lua scripts through a comprehensive API.

The 3D printing script is hosted on pastebin for your convenience,

To install it to your turtle, type pastebin get 7Uukwe9b printer

The first editor for creating instructions for the 3D printer is located online here,

It is currently limited to a 2D view; by adding layers, you work your way upwards along the Y axis. Remember to add a layer before you try to export. Copy the exported data and create a new paste on pastebin. After you have submitted the data, look at the last characters in the URL. Example:

Going back to the turtle, you have already installed the 3D printer script as printer.

Type printer <pastebin code>

Example: printer 5uzbb6P0

Turtle slots
Slot 1: Fuel
Slot 2: Resource
Slot 5: Ender chest with fuel
Slot 6: Ender chest with resources


In addition to the 2D view editor, Morgan Sandbæk is working on a 3D editor with import/export capabilities, so the two editors can be used in a combined effort. The 3D editor is still in development, but I will leave you with a teaser.

The 3D editor, built in Unity.




Back in late March, Enigma arranged a hackathon in the new facilities of NyVekst AS, across the road from HiØ Remmen. Everyone was encouraged to do something they really wanted to do; if that meant working on a school project, that was just fine. Many fellow students showed up to pull an extra 8-hour run after ordinary school hours.


I teamed up with two students from computer engineering, Jon Kjennbakken and Adrian Edlund. Our project was highly motivated by utilizing the Kinect tracker in some way, which spurred the idea of making info-screens more interactive. Info-screens usually contain lots of information displayed on a screen, and the content is swapped in short time cycles to allow more information to be presented. Sometimes this cycle is too short and you're unable to read the whole content within the given time cycle; then you are forced to wait for the other content to cycle through before the entry you were interested in returns.

So as not to waste too much time developing a new info-screen, we went with a project I had fiddled with earlier. The project exposes functions in JavaScript for easily navigating back and forth in the content cycle. To handle the Kinect we built a C# application, due to the easy integration of the Kinect for Windows SDK. The challenge was primarily in how we should interpret the data and how we should interact with the info-screen. It felt reasonable to use the arms to slide between content, similar to how we slide through images on a smartphone, so gathering the x, y, z coordinates of the hands from the Kinect seemed the way to go. However, the user's intent was quite hard to determine based on the continuous movement. Let's say we acted on the coordinates of the right hand. Using the x coordinate, we could determine the direction of the hand along a horizontal line: by moving the arm to the right, we could check whether the last x was less than or greater than the newest x. We also added a threshold on the minimum required movement, Abs(oldX - newX) > 20, so that it would not bogusly react to random movement.


Thinking we had it all figured out, it was still not working as intended; even with this approach we got errors on the return path. We thought we could measure the speed of the hand moving to the right and act on it, but that would still be confounded by the speed of retracting the arm back to its normal position. The final take we ended up with was to only interpret the hand direction when the hand was above the elbow, relying on the hand falling beneath the elbow's y coordinate on the return gesture. While it is far from ideal, it works decently!
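The combined rule above (delta threshold plus hand-above-elbow gate) can be sketched like this. This is a minimal illustration with invented names, not the hackathon code or the Kinect SDK's types; it assumes joint coordinates where the y axis points up, so "above the elbow" means handY > elbowY.

```csharp
using System;

enum Swipe { None, Left, Right }

class SwipeDetector
{
    private const float Threshold = 20f; // minimum horizontal movement, as in Abs(oldX - newX) > 20
    private float? lastX;

    public Swipe Update(float handX, float handY, float elbowY)
    {
        if (handY <= elbowY)
        {
            lastX = null;       // hand is resting or returning; ignore this path
            return Swipe.None;
        }
        if (lastX == null)
        {
            lastX = handX;      // first sample above the elbow; nothing to compare yet
            return Swipe.None;
        }
        float delta = handX - lastX.Value;
        lastX = handX;
        if (Math.Abs(delta) < Threshold)
            return Swipe.None;  // below the jitter threshold
        return delta > 0 ? Swipe.Right : Swipe.Left;
    }
}
```

Resetting lastX whenever the hand drops below the elbow is what keeps the return motion from registering as a swipe in the opposite direction.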

We used Socket.IO to facilitate the communication between the C# app and the web app.

Before the evening was over, we managed to squeeze in a little Easter egg triggered by two persons stretching their arms in the air. Ready? GO!

Jon and Adrian playing pong.