I'm Jimmy Hansson


I'm a technical designer with an interest in developing games and systems with a focus on user experience. I have ~10 years of professional experience as a software designer and programmer.

Unity UE4 C# Blueprint UX


Movement and Interaction System

What it is

A game called Echo, focused on environmental storytelling, made in Unreal Engine 4.

What I did

I was mainly responsible for the character movement and interactions. The movement is a custom-made, physics-based system that allows for more realistic and predictable interactions with the environment than the default Unreal character.

The goal for interacting with objects was fairly realistic movement that looks and feels like you, as the player, are dragging around a physical object.

Physics constraints are used to attach the object to the player, and IK is then used on the hands and arms to position them on the object's surface. Raycasting is used to find those surface locations.
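The flow above can be sketched in C#. The actual implementation is in Blueprints, so this is only an illustrative sketch: Unity's ConfigurableJoint stands in for UE4's physics constraint, and all class and method names are assumptions.

```csharp
using UnityEngine;

// Illustrative sketch of the grab flow: a joint attaches the object to the
// player, and a raycast finds where to place each IK hand on its surface.
public class Grabber : MonoBehaviour
{
    private ConfigurableJoint grabJoint;

    public void Grab(Rigidbody target)
    {
        // Constrain the object to the player so it follows player movement.
        grabJoint = gameObject.AddComponent<ConfigurableJoint>();
        grabJoint.connectedBody = target;
        grabJoint.xMotion = ConfigurableJointMotion.Locked;
        grabJoint.yMotion = ConfigurableJointMotion.Locked;
        grabJoint.zMotion = ConfigurableJointMotion.Locked;
    }

    public Vector3? HandIkTarget(Vector3 handOrigin, Vector3 towardObject)
    {
        // Raycast toward the object to find a surface point for the hand IK.
        if (Physics.Raycast(handOrigin, towardObject, out RaycastHit hit, 1.5f))
            return hit.point;
        return null; // No surface in reach: keep the default hand pose.
    }

    public void Release()
    {
        if (grabJoint != null) Destroy(grabJoint);
    }
}
```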

Part of Blueprint code to grab an object:

Blueprint for IK on arms when grabbing an object:

What I learned

Since there were only two main actions (jumping and dragging), I should have prioritized making sure they felt really good for the player to execute. Jumping should feel more fluid when used from a running state. Dragging objects around should be stabilized by using two physics constraints, simulating the attachment of both of the player's hands.

I learned that by creating your own movement system you gain a lot of flexibility, but it also takes more time than I expected. There are a lot of small things that make up a solid movement system, like keeping track of all possible edge cases and states the character can be in.

Nimble Hands - Physics-Based Thief Game

What it is

Nimble Hands is a game where you, as a thief, have to navigate tight spaces with your ever-growing, physics-based sack of loot.

What I did

A ten-page Game Design Document that describes the story, gameplay, unique mechanics, player character, game world, game experience, game mechanics, enemies and challenges, level design, art style, music, sound, target audience and monetization.

I also created the prototypes in the video to showcase the fun factor of a physics-based sack. The first prototype was made in Unreal Engine and the more finished version was made in Unity.

Enemy AI with GOAP

What it is

An FPS arena shooter with AI made using Goal Oriented Action Planning. A 10-day project made in Unity and C#.

What I did

Created an FPS controller and a procedurally generated map for fast-paced action. The AI is built with ReGoap and inspired by the AI in F.E.A.R.

Example code that looks for cover positions to add to the agent's memory:


// Sensor for remembering reachable cover positions.
// Find "edges" in the navmesh and check whether they can cover the agent
// in the direction of the player.
if (NavMesh.FindClosestEdge(transform.position, out defensiveEdgeHit, NavMesh.AllAreas))
{
    for (int i = 0; i < numCoverPositions; i++)
    {
        RaycastHit hit;
        Vector2 randomPosCircle = Random.insideUnitCircle * searchArea;
        Vector3 randomPos = new Vector3(randomPosCircle.x, groundLevel, randomPosCircle.y);
        NavMeshHit navHit;

        if (NavMesh.SamplePosition(defensiveEdgeHit.position + (-directionToPlayer) + randomPos, out navHit, searchRadius, NavMesh.AllAreas))
        {
            var direction = (pc.transform.position - navHit.position) / heading.magnitude;

            if (Physics.Raycast(navHit.position + new Vector3(0, 0.5f, 0), direction, out hit, 40f))
            {
                // Only ArenaBlocks can be between the player and the AI
                if (hit.collider.CompareTag("ArenaBlock"))
                {
                    memory.GetWorldState().Set(SENSOR_TYPE.DEFENSIVE_COVER_POSITION, defensiveCoverPosition);
                }
            }
        }
    }
}

Example code for the go-to-cover action:


protected override void Awake()
{
    // Set the required preconditions and the effects of a successful action
    preconditions.Set(SENSOR_TYPE.LAST_POSITION_KNOWN, true);
    preconditions.Set(SENSOR_TYPE.CAN_NOT_SEE_TARGET, true);
    effects.Set(SENSOR_TYPE.IS_IN_COVER, true);
}

public void Update(...)
{
    // LAST_SEEN_POSITION has been updated while moving to cover: exit this action
    if (lastTargetLocation != (Vector3)agent.GetMemory().GetWorldState().Get(SENSOR_TYPE.LAST_SEEN_POSITION))
    {
        // (abort the action and let the planner replan)
    }

    // Has a cover position to move to
    if (coverPos != Vector3.zero)
    {
        // Cover has been reached
        if (Vector3.Distance(transform.position, coverPos) < maxDistanceInCover)
        {
            agent.GetMemory().GetWorldState().Set(SENSOR_TYPE.MOVING_TOWARDS_COVER, false);
            agent.GetMemory().GetWorldState().Set(SENSOR_TYPE.IS_IN_COVER, true);

            // Been in cover long enough
            if (coverTimerStarted && Time.time > timeToCover)
            {
                agent.GetMemory().GetWorldState().Set(SENSOR_TYPE.IS_IN_COVER, false);
                coverTimerStarted = false;
            }

            // Exit cover if the target is aiming at me
            if (coverTimerStarted && (bool)agent.GetMemory().GetWorldState().Get(SENSOR_TYPE.TARGET_IS_AIMING_AT_ME))
            {
                // (leave cover)
            }
        }
    }
}

What I learned

I really enjoyed working with GOAP. It gives the AI a flexibility that is harder to reach with Behaviour Trees or similar AI solutions.

I still think there is too much hard-coding in how the actions are performed, and that the preconditions and effects are error-prone. I believe combining the philosophy behind GOAP with machine learning would solve a lot of these issues.

Machine Learning: Golf

What it is

An AI that has learned to play simple golf using machine learning. The result shown is from only around 30 minutes of training.

What I did

I used Unity and ml-agents to train the AI. I made a training environment that used curriculum training to teach the AI in steps. First it had the green (goal) close by, always in the same position, making it easy to hit by accident. When the AI reached a high enough success rate, the next stage started, where the green's position was randomized. This way it could learn that what mattered was not a fixed world position, but the actual position of the green relative to the ball.
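The lesson-advancement idea can be sketched in plain C#. This is a hypothetical illustration, not the project's actual setup (ml-agents normally drives curriculum from a config file keyed on a reward metric), and all names and thresholds here are assumptions.

```csharp
// Hypothetical sketch of curriculum progression: advance to the next lesson
// once the success rate over a window of episodes clears a threshold.
public class GolfCurriculum
{
    // Illustrative per-lesson difficulty: how much the green's position is randomized.
    private readonly float[] greenRandomness = { 0f, 0.25f, 0.5f, 1f };
    private int lesson = 0;
    private int attempts = 0;
    private int successes = 0;

    public float CurrentGreenRandomness => greenRandomness[lesson];

    public void ReportEpisode(bool success)
    {
        attempts++;
        if (success) successes++;

        // Evaluate every 100 episodes; advance on >80% success rate.
        if (attempts >= 100)
        {
            if ((float)successes / attempts > 0.8f && lesson < greenRandomness.Length - 1)
                lesson++;
            attempts = 0;
            successes = 0;
        }
    }
}
```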

After several more lessons that gradually made things harder and introduced new challenges, the AI could play randomly generated maps with a high success rate, even when I added randomness to how cleanly the ball was hit.

Code for the AI observations:


public override void CollectObservations()
{
    float rayDistance = 5f;
    float[] rayAngles = { 0f, 45f, 90f, 135f, 180f, 225f, 270f, 315f };
    float[] rayAnglesOffset = { 22.5f, 67.5f, 112.5f, 157.5f, 202.5f, 247.5f, 292.5f, 337.5f };
    string[] detectableObjects = { "ground", "goal", "fairway" };

    // Raycast the environment at different angles and distances
    AddVectorObs(rayGridPer.Perceive(1f, rayAngles, detectableObjects, 0f, 7f));
    AddVectorObs(rayGridPer.Perceive(2f, rayAnglesOffset, detectableObjects, 0f, 7f));
    AddVectorObs(rayGridPer.Perceive(3f, rayAngles, detectableObjects, 0f, 7f));
    AddVectorObs(rayGridPer.Perceive(6f, rayAnglesOffset, detectableObjects, 0f, 7f));
    AddVectorObs(rayGridPer.Perceive(10f, rayAngles, detectableObjects, 0f, 7f));

    // Observe the spatial relations between different objects
    AddVectorObs(target.transform.position - goal.transform.position);
    AddVectorObs(goal.transform.position - area.transform.position);
    AddVectorObs(Vector3.Distance(target.transform.position, goal.transform.position));
    AddVectorObs(maxTargetDistance - targetDistance);

    // Observe whether the predicted trajectory hits something
    aimingAtObstacle = 0f;
    for (int i = 0; i < launchArcRenderer.lr.positionCount; i++)
    {
        float hitSomething = 0.0f;
        Collider[] hits = Physics.OverlapSphere(launchArcRenderer.lr.GetPosition(i), 0.3f);
        if (hits.Length > 0)
        {
            if (hits[0].CompareTag("obstacle"))
            {
                hitSomething = 1.0f;
                aimingAtObstacle = 1.0f;
            }
        }
        AddVectorObs(hitSomething);
    }
}


What I learned

It's easy for the AI to come to the wrong conclusions about what the different observations actually mean. A lot of trial and error was needed to find a balance between the observations. It's easy to "overload" the AI.

Curriculum learning, where the AI goes through several lessons that get gradually harder, is very effective for reaching the end goal in much less time than without it, if the challenge is fairly hard.

Realistic ball physics for sports

What it is

I like simulations. I like golf. I like football. So I wanted to create ball physics that tries to simulate a realistic flight trajectory. Made in Unity and C#.

What I did

I read many papers about ball flight and tried to integrate them with Unity's own physics. The ball-flight physics uses things like temperature, air density, Reynolds number, and a bunch of different coefficients and frictions to calculate the Magnus force and spin.

Each golf club also has its own mass, loft and ECOR value that have different impacts on the ball. Depending on the club's swing angle, spin is added to the ball, which can result in slicing or hooking.

One part of calculating ball flight:


Vector3 DragAndMagnusForce() {
  // Compute the apparent velocity magnitude (small offset avoids division by zero).
  velocityMagnitude = rigidBody.velocity.magnitude + 0.0000001f;

  // Compute the total drag force and the directional drag.
  totalDragForce = 0.5f * density * area * cd * (velocityMagnitude * velocityMagnitude);
  directionalDrag = -totalDragForce * rigidBody.velocity / velocityMagnitude;

  // Evaluate the Magnus force terms.
  rotationSpinRatio = Mathf.Abs(radius * rigidBody.angularVelocity.magnitude / velocityMagnitude);
  liftCoefficient = -0.05f + Mathf.Sqrt(0.0025f + 0.36f * rotationSpinRatio);
  magnusForceCoefficient = 0.5f * density * area * liftCoefficient * (velocityMagnitude * velocityMagnitude);
  magnusForce = magnusForceCoefficient * Vector3.Cross(rigidBody.angularVelocity, rigidBody.velocity).normalized;

  // Return the sum of directional drag and Magnus force.
  return magnusForce + directionalDrag;
}

Code to calculate force from golf club:


public void CalculateClubCollision(GolfBall golfBall)
{
    Rigidbody golfBallRb = golfBall.GetComponent<Rigidbody>();

    // Convert the loft angle from degrees to radians and
    // assign values to some convenience variables.
    float loftRadians = (loft + ballHitAngle) * Mathf.PI / 180.0f;
    float clubFaceAngleRadians = clubFaceAngle * Mathf.PI / 180.0f;
    float swingAngleRadians = swingAngle * Mathf.PI / 180.0f;
    float ballAngle = (loft - ballHitAngle) * Mathf.PI / 180.0f;
    float cosL = Mathf.Cos(loftRadians);
    float sinL = Mathf.Sin(loftRadians);
    float sinL2 = Mathf.Sin(ballAngle);

    // Calculate the pre-collision velocities normal and parallel to the line of action.
    float velocityClubParallel = cosL * shotPower;
    float velocityClubNormal = -sinL * shotPower;
    float vcn2 = -sinL2 * shotPower;
    float vcnAngle = Mathf.Sin(clubFaceAngleRadians + swingAngleRadians) * shotPower;

    // Compute the post-collision velocity of the ball along the line of action.
    float velocityBallParallel = (1.0f + ecor) * clubMass * velocityClubParallel / (clubMass + golfBallRb.mass);

    // Compute the post-collision velocity of the ball perpendicular to the line of action.
    float velocityBallNormal = (5.0f / 7.0f) * clubMass * velocityClubNormal / (clubMass + golfBallRb.mass);

    // Spin: backspin from the loft, sidespin from the club face and swing angle.
    float omega = (5.0f / 7.0f) * velocityClubNormal / golfBall.radius;
    float omegaSide = (5.0f / 7.0f) * vcnAngle / golfBall.radius;

    // Rotate the post-collision ball velocities back into the standard Cartesian frame of reference.
    float vx0 = 0f;
    float vy0 = (sinL * velocityBallParallel) + (cosL * velocityBallNormal);
    float vz0 = (cosL * velocityBallParallel) - (sinL * velocityBallNormal);

    // Set the direction of the ball depending on the aiming direction and swing angle.
    Vector3 directionToAim = -transform.right + (rb.velocity.normalized * 0.15f);
    directionToAim.y = 0;
    directionToAim = Quaternion.AngleAxis(swingAngle, Vector3.up) * directionToAim;
    float angle = Vector3.Angle(directionToAim, Vector3.forward) + swingAngle;
    directionToAim = directionToAim.normalized;
    angle = (directionToAim.x < 0) ? -angle : angle;

    // Set the velocity of the golf ball.
    golfBallRb.velocity = new Vector3(vz0 * directionToAim.x, vy0, vz0 * directionToAim.z);

    // Set the spin of the ball that is generated by the club impact.
    golfBallRb.angularVelocity = new Vector3(
        omega * Mathf.Cos(angle * Mathf.PI / 180.0f),
        omega * -Mathf.Sin(angle * Mathf.PI / 180.0f),
        omegaSide); // sidespin component
}

Climbing System in Unreal Engine 4

What it is

Inspired by games like Shadow of the Colossus and The Legend of Zelda: Breath of the Wild I've created a Blueprint based climbing system in Unreal Engine 4. Being able to climb freely gives the player a lot of interesting ways to explore the world.

What I did

A system that lets the player climb freely on any surface in the world. The surfaces can be static or moving. Climbing is activated when the player moves into a climbable object within a certain angle threshold between the surface normal and the direction the player is looking. Raycasting is then used to check whether it's possible to climb in the input direction. If it is, a helper is created at the local position of the raycast hit on the object being climbed. This way, when we lerp to that position, it doesn't matter if the object moves; we always have the local position stored.
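The local-position trick can be sketched in C#. The actual system is built in Blueprints, so this is only an illustrative sketch with assumed names, using Unity's transform API to show the same idea.

```csharp
using UnityEngine;

// Illustrative sketch of the climbing helper: by storing the raycast hit in
// the climbed object's local space, the target stays valid even when the
// object moves, since we convert back to world space every frame.
public class ClimbHelper
{
    private Transform climbedObject;
    private Vector3 localTarget;

    public void SetTarget(Transform hitObject, Vector3 worldHitPoint)
    {
        climbedObject = hitObject;
        // Store the hit point relative to the object, not in world space.
        localTarget = hitObject.InverseTransformPoint(worldHitPoint);
    }

    public Vector3 CurrentWorldTarget()
    {
        // A moving platform carries the target along with it.
        return climbedObject.TransformPoint(localTarget);
    }
}
```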

Main blueprint in climbing component:


What I learned

Creating more advanced features in Blueprints is not easy, and clearly not what they are designed for. Many of the complicated Blueprints could have been written in a few lines of C++.

Still, Blueprints are really powerful, and if more advanced features need to be done in Blueprints, it CAN be done.

Gameplay Scripter in "Throw Motion"

What it is

Throw Motion was developed in four weeks by a team of game designers and 3D artists. It's a 1-4 player party game where the two sports of dodgeball and figure skating have been mashed together.

What I did

I was mainly responsible for the character movement, game rules and camera functionalities.

The aim for the movement was to find a good balance between sliding on the ice and still having responsive controls. Character movement is based on the Unreal character movement, with modifications to make it feel like you're sliding around on ice.
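The sliding feel can be sketched as follows. The actual game modifies Unreal's character movement; this C# sketch and its tuning values are assumptions, meant only to show the general idea.

```csharp
using UnityEngine;

// Illustrative sketch: instead of applying the input speed directly, only
// steer the current velocity toward the desired one each frame. The
// leftover velocity is what reads as "sliding" on the ice.
public class IceMovement : MonoBehaviour
{
    public float maxSpeed = 8f;
    public float iceControl = 2f; // Lower = more slippery, higher = more responsive
    private Vector3 velocity;

    public void Move(Vector3 input)
    {
        Vector3 desired = Vector3.ClampMagnitude(input, 1f) * maxSpeed;
        velocity = Vector3.Lerp(velocity, desired, iceControl * Time.deltaTime);
        transform.position += velocity * Time.deltaTime;
    }
}
```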

For the camera, I set up a Blueprint that makes the camera zoom in and out depending on where the players are, making sure they are all in frame. This way the camera gets closer to the action when all players are close to each other.
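The framing logic can be sketched in C#. The original is a Blueprint, so this is an illustrative sketch; the field names and tuning values are assumptions.

```csharp
using UnityEngine;

// Illustrative sketch: grow a bounding box around all players and pull the
// camera back proportionally to how spread out they are, so the camera
// zooms in when players cluster together.
public class PartyCamera : MonoBehaviour
{
    public Transform[] players;
    public float minDistance = 8f;  // Never get closer than this
    public float zoomFactor = 0.6f; // How strongly spread translates to distance
    public float smoothing = 3f;

    void LateUpdate()
    {
        Bounds bounds = new Bounds(players[0].position, Vector3.zero);
        foreach (Transform p in players)
            bounds.Encapsulate(p.position);

        float distance = Mathf.Max(minDistance, bounds.size.magnitude * zoomFactor);
        Vector3 target = bounds.center - transform.forward * distance;
        transform.position = Vector3.Lerp(transform.position, target, smoothing * Time.deltaTime);
    }
}
```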

What I learned

In intense party games, it's better to go for a more arcade feeling in the controls. The user experience was affected negatively by having the players slide too much on the ice. By making the controls more responsive, the players would have felt more in control of their characters.

I also learned that you can make a pretty good prototype for a game in two weeks with good communication in the team.

Equipment and Inventory System

What it is

A standard equipment and inventory system created in Unreal Engine 4 with Blueprints.

What I did

A GUI to navigate and use items, and an inventory component that handles the actual inventory. The inventory has categories with a different ruleset for each category. Some items can stack while others are unique. You can use, drop, equip and unequip items depending on the item type.

Part of the Blueprint that adds items to the inventory:
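The stacking rule described above can be sketched in C#. The real system is a Blueprint, so this is only an illustrative sketch; the class and field names are assumptions.

```csharp
using System.Collections.Generic;

// Illustrative sketch of the add-item rule: stackable items merge into an
// existing slot of the same kind, unique items always get their own slot.
public class Inventory
{
    private readonly List<ItemSlot> slots = new List<ItemSlot>();

    public void Add(Item item)
    {
        if (item.Stackable)
        {
            ItemSlot slot = slots.Find(s => s.Item.Id == item.Id);
            if (slot != null)
            {
                slot.Count++;
                return;
            }
        }
        // Unique items, or the first of a stackable kind, get a new slot.
        slots.Add(new ItemSlot { Item = item, Count = 1 });
    }
}

public class Item
{
    public string Id;
    public bool Stackable;
}

public class ItemSlot
{
    public Item Item;
    public int Count;
}
```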