Hello!

I'm Jimmy Hansson

Technical Game Designer | Gameplay Programmer | AI Designer & Programmer | Avid Golfer

I'm a technical game designer with an interest in developing games and systems with a focus on user experience. I'm currently working on Goat Simulator 3 @ Coffee Stain North.

Unreal Engine | C++ | Blueprint | Unity | C#
LinkedIn Profile.

My Work.

Goat Simulator 3

Technical Game Designer

I am a Technical Game Designer at Coffee Stain North, actively involved in the continuous development of Goat Simulator 3.

Some of the things I do at work:

  • Brainstorm new ideas
  • Prototype events, gears and interactables
  • Take prototypes to final versions
  • Play table tennis
  • Fix some bugs, leave some for goatyness

Special areas of responsibility include:

  • Setup for new playable characters
  • Physics Assets for characters, gears and objects
  • Tools development for designers

Utility AI

Utility AI extension

An extension to Unreal Engine's Behavior Tree that adds Utility Selector and Action Nodes with considerations.

I wanted a more reactive and flexible AI, so I created a system where the AI selects an action (task) based on how useful it is at the current moment. Utility AI is well suited for this, as each action has considerations attached to it that each return a score based on the current world state.
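As an illustrative, engine-agnostic C# sketch of the scoring idea (the class and method names here are hypothetical, not the actual types in my extension):

using System.Collections.Generic;
using System.Linq;

// Blackboard-style container for whatever the considerations read.
public class WorldState
{
    public Dictionary<string, float> Values = new Dictionary<string, float>();
}

// A consideration scores one aspect of the world, normalized to 0..1.
public interface IConsideration
{
    float Score(WorldState state);
}

public class UtilityAction
{
    public string Name;
    public List<IConsideration> Considerations = new List<IConsideration>();

    // Multiply the consideration scores; a single 0 vetoes the action.
    public float Evaluate(WorldState state)
    {
        float score = 1f;
        foreach (var consideration in Considerations)
            score *= consideration.Score(state);
        return score;
    }
}

public class UtilitySelector
{
    // Pick the most useful action for the current moment.
    public UtilityAction Select(IEnumerable<UtilityAction> actions, WorldState state)
    {
        return actions.OrderByDescending(a => a.Evaluate(state)).FirstOrDefault();
    }
}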

By extending the Behavior Tree we can still use the strengths of behavior trees in situations where we need more controlled behavior. In my system, each action takes advantage of the new, powerful StateTree asset and has all of its behavior contained in the StateTree.

All navigation for the agent is handled in its own separate StateTree that lives outside the Behavior Tree. This allows actions to be layered on top of movement when needed. An example would be a "Fire weapon" action that lets the agent keep moving towards the target set by an earlier action. An action can also send a "stop" event to the movement StateTree to pause the agent's movement, or set its own movement location.

Since all actions are StateTrees, they can gracefully handle interruption in their exit states. When the utility system finds a higher scoring action, it tells the currently running action's StateTree to abort. Once the StateTree has handled its exit states, the utility selector switches to the new action.
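The switch to a better action can be sketched in C# like this (hypothetical names again; in the real system the exit handling lives inside each action's StateTree asset):

public enum ActionStatus { Running, Exiting, Finished }

public class RunningAction
{
    public ActionStatus Status { get; private set; } = ActionStatus.Running;

    // Ask the action's state tree to transition into its exit states.
    public void RequestAbort()
    {
        if (Status == ActionStatus.Running)
            Status = ActionStatus.Exiting;
    }

    // Tick the action; here we pretend the exit states finish in one tick.
    public void Tick()
    {
        if (Status == ActionStatus.Exiting)
            Status = ActionStatus.Finished;
    }
}

public class ActionSwitcher
{
    RunningAction current;
    RunningAction pending;

    // A higher scoring action was found: abort the current one, but only
    // swap once its exit states have had a chance to run.
    public void SwitchTo(RunningAction next)
    {
        pending = next;
        if (current != null)
            current.RequestAbort();
    }

    public void Tick()
    {
        if (current != null)
            current.Tick();

        if (pending != null && (current == null || current.Status == ActionStatus.Finished))
        {
            current = pending;
            pending = null;
        }
    }
}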

Movement and Interaction

What it is

Echo, a game focused on environmental storytelling, made in Unreal Engine 4.

What I did

I was mainly responsible for the character movement and interactions. The movement is a custom-made, physics-based system that allows for more realistic and predictable interactions with the environment than the default Unreal character.

The goal for interacting with objects was to get fairly realistic movement, where it looks and feels like you, as the player, are dragging around a physical object.

Physics constraints are used to attach the object to the player, and IK is then used on the hands and arms to position them on the object's surface. Raycasting is used to find the attachment locations.
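The Blueprints themselves are shown below, but the core idea translates roughly to this Unity-style C# (a hedged sketch with hypothetical names; the actual project uses UE4 physics constraint components and IK nodes):

using UnityEngine;

public class PhysicsGrabber : MonoBehaviour
{
    public float grabRange = 2f;

    // Local-space point on the grabbed object used as the hand IK target.
    public Vector3 HandTargetLocal { get; private set; }

    FixedJoint joint;

    // Raycast forward; if we hit a rigidbody, constrain it to the player.
    public void TryGrab()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit, grabRange)
            && hit.rigidbody != null)
        {
            joint = gameObject.AddComponent<FixedJoint>();
            joint.connectedBody = hit.rigidbody;

            // The hit point in the object's local space stays valid even
            // when the object moves, so the IK hands can track its surface.
            HandTargetLocal = hit.transform.InverseTransformPoint(hit.point);
        }
    }

    public void Release()
    {
        if (joint != null)
            Destroy(joint);
    }
}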

Part of Blueprint code to grab an object:

Blueprint for IK on arms when grabbing an object:

Other projects.

Nimble Hands

What it is

Nimble Hands is a game where you, as a thief, have to navigate tight spaces with your ever-growing, physics-based sack of loot.

What I did

A ten-page Game Design Document that describes story, gameplay, unique mechanics, the player character, the game world, the game experience, game mechanics, enemies and challenges, level design, art style, music, sound, target audience and monetization.

I also created the prototypes in the video to showcase the fun factor of a physics-based sack. The first prototype was made in Unreal Engine and the more finished version in Unity.

Enemy AI with GOAP

What it is

An FPS arena shooter with AI built using Goal Oriented Action Planning (GOAP). A 10-day project made in Unity and C#.

What I did

Created an FPS controller and a procedurally generated map for fast-paced action. The AI is built with ReGoap and inspired by the AI in F.E.A.R.

Example code that looks for cover positions to add to the agent's memory:

...
// Sensor for remembering reachable cover positions
// Find "edges" in navmesh and see if they can cover you in the player direction
if (NavMesh.FindClosestEdge(transform.position, out defensiveEdgeHit, NavMesh.AllAreas))
{
    for (int i = 0; i < numCoverPositions; i++)
    {
        RaycastHit hit;
        Vector2 randomPosCircle = (Random.insideUnitCircle * searchArea);
        Vector3 randomPos = new Vector3(randomPosCircle.x, groundLevel, randomPosCircle.y);
        NavMeshHit navHit;

        if (NavMesh.SamplePosition(defensiveEdgeHit.position + (-directionToPlayer) + randomPos, out navHit, searchRadius, NavMesh.AllAreas))
        {
            // Direction from the candidate cover spot towards the player
            var direction = (pc.transform.position - navHit.position).normalized;

            if (Physics.Raycast(navHit.position + new Vector3(0, 0.5f, 0), direction, out hit, 40f))
            {
                // Only ArenaBlocks can be between player and AI
                if (hit.collider.CompareTag("ArenaBlock"))
                {
                    defensiveCoverPosition = navHit.position; // remember this reachable spot
                    memory.GetWorldState().Set(SENSOR_TYPE.DEFENSIVE_COVER_POSITION, defensiveCoverPosition);
                    break;
                }
            }
        }
    }
}
...

Example code for the go-to-cover action:

...
protected override void Awake()
{
  ...
  // Set required preconditions and effects for a successful action
  preconditions.Set(SENSOR_TYPE.LAST_POSITION_KNOWN, true);
  preconditions.Set(SENSOR_TYPE.CAN_NOT_SEE_TARGET, true);
  preconditions.Set(SENSOR_TYPE.NOT_CLOSE_LAST_SEEN_POSITION, true);
  effects.Set(SENSOR_TYPE.IS_IN_COVER, true);
  effects.Set(SENSOR_TYPE.TARGET_IS_NOT_AIMING_AT_ME, true);
}
...
public void Update(...)
{
  // LAST_SEEN_POSITION has been updated while moving to cover: exit out of this action
  if (lastTargetLocation != (Vector3)agent.GetMemory().GetWorldState().Get(SENSOR_TYPE.LAST_SEEN_POSITION))
  {
      failCallback(this);
  }

  // Has a cover position to move to
  if (coverPos != Vector3.zero)
  {
      // Cover has been reached
      if (Vector3.Distance(transform.position, coverPos) < maxDistanceInCover)
      {
          agent.GetMemory().GetWorldState().Set(SENSOR_TYPE.MOVING_TOWARDS_COVER, false);
          agent.GetMemory().GetWorldState().Set(SENSOR_TYPE.IS_IN_COVER, true);

          ...
          
          // Been in cover long enough
          if (coverTimerStarted && Time.time > timeToCover)
          {
              agent.GetMemory().GetWorldState().Set(SENSOR_TYPE.IS_IN_COVER, false);
              coverTimerStarted = false;
              doneCallback(this);
          }                

          // Exit cover if target is aiming at me
          if (coverTimerStarted && (bool)agent.GetMemory().GetWorldState().Get(SENSOR_TYPE.TARGET_IS_AIMING_AT_ME))
          {
              failCallback(this);
          }
      }
  }
  ...
}
...

What I learned

I really enjoyed working with GOAP. It gives the AI a flexibility that is harder to achieve with Behavior Trees or State Machines.

The main downside of GOAP is probably the performance cost of the planner. The question is how important it really is for an AI to plan many steps ahead. A well-designed Utility AI could probably produce similar results without the planner overhead.

Climbing System

What it is

Inspired by games like Shadow of the Colossus and The Legend of Zelda: Breath of the Wild I've created a Blueprint based climbing system in Unreal Engine 4. Being able to climb freely gives the player a lot of interesting ways to explore the world.

What I did

A system that lets the player climb freely on any surface in the world. Surfaces can be static or moving. Climbing is activated when the player moves into a climbable object within a certain angle threshold between the climbable surface's normal and the player's look direction. Raycasting is then used to check whether it's possible to climb in the input direction. If it is, a helper is created at the local position of the raycast hit on the object being climbed. Because we lerp towards this stored local position, it doesn't matter if the object moves; the target follows the surface.
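The moving-surface trick boils down to storing the hit in the climbed object's local space and converting back to world space every frame. A minimal Unity-style C# sketch of the same idea (the project itself is pure UE4 Blueprint):

using UnityEngine;

public class ClimbTarget : MonoBehaviour
{
    Transform surface;    // the object being climbed
    Vector3 localTarget;  // raycast hit stored in the surface's local space

    public void SetTarget(Transform climbedSurface, Vector3 worldHitPoint)
    {
        surface = climbedSurface;
        localTarget = surface.InverseTransformPoint(worldHitPoint);
    }

    void Update()
    {
        if (surface == null)
            return;

        // Convert back to world space every frame, so the target follows
        // the surface even if it moves while we lerp towards it.
        Vector3 worldTarget = surface.TransformPoint(localTarget);
        transform.position = Vector3.Lerp(transform.position, worldTarget, 10f * Time.deltaTime);
    }
}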

Main blueprint in climbing component:

What I learned

Creating more advanced features in Blueprint is not easy and clearly not what it was designed for. Many of the complicated Blueprints could have been written in a few lines of C++.

I have gained a lot of knowledge of Blueprint and can appreciate the power of visual scripting.

Inventory System

What it is

Standard type of equipment and inventory system created in Unreal Engine 4 with Blueprints.

What I did

A GUI to navigate and use items, and an inventory component that handles the actual inventory. The inventory has categories, with a different ruleset for each category. Some items can stack while others are unique. You can use, drop, equip and unequip items depending on the item type.
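A minimal C# sketch of the stacking rules (illustrative names only; the real system is a Blueprint component):

using System.Collections.Generic;
using System.Linq;

public enum ItemCategory { Consumable, Equipment, Quest }

public class Item
{
    public string Id;
    public ItemCategory Category;
    public bool Stackable;  // e.g. consumables stack, equipment is unique
}

public class InventorySlot
{
    public Item Item;
    public int Count;
}

public class Inventory
{
    readonly Dictionary<ItemCategory, List<InventorySlot>> slotsByCategory =
        new Dictionary<ItemCategory, List<InventorySlot>>();

    public void Add(Item item)
    {
        List<InventorySlot> slots;
        if (!slotsByCategory.TryGetValue(item.Category, out slots))
        {
            slots = new List<InventorySlot>();
            slotsByCategory[item.Category] = slots;
        }

        // Stackable items increase the count of an existing slot;
        // unique items always get a slot of their own.
        var existing = item.Stackable ? slots.FirstOrDefault(s => s.Item.Id == item.Id) : null;
        if (existing != null)
            existing.Count++;
        else
            slots.Add(new InventorySlot { Item = item, Count = 1 });
    }
}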

Part of blueprint to add items to inventory:

Machine Learning: Golf

What it is

AI that learned to play simple golf using machine learning. The result shown comes from only around 30 minutes of training.

What I did

I used Unity and ml-agents to train the AI. I made a training environment that used curriculum training to teach the AI in steps. First the green (the goal) was close by, in a fixed position, making it easy to hit by accident. When the AI had a high enough success rate, the next stage started, where the green's position was randomized. This way it could learn that what mattered was not a fixed world position but the actual position of the green relative to it.
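The lesson switching itself was driven by the ml-agents curriculum configuration; the environment-side effect of a lesson can be sketched like this (a hypothetical helper, not the ml-agents API):

using UnityEngine;

public class GreenPlacer : MonoBehaviour
{
    public Transform green;

    // Lesson 0: a fixed, easy-to-hit position. Later lessons widen the
    // random radius, so the agent has to track the green itself instead
    // of memorizing a world position.
    public void PlaceGreen(int lesson)
    {
        if (lesson == 0)
        {
            green.position = new Vector3(0f, 0f, 10f);
            return;
        }

        float radius = 5f * lesson;
        Vector2 offset = Random.insideUnitCircle * radius;
        green.position = new Vector3(offset.x, 0f, 10f + offset.y);
    }
}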

After several lessons that gradually made things harder and introduced more challenges, the AI could play randomly generated maps with a high success rate, even when I added randomness to how cleanly the ball was hit.

Code for the AI observations:

public override void CollectObservations()
{
    float rayDistance = 5f;
    float[] rayAngles = { 0f, 45f, 90f, 135f, 180f, 225f, 270f, 315f };
    float[] rayAnglesOffset = { 22.5f, 67.5f, 112.5f, 157.5f, 202.5f, 247.5f, 292.5f, 337.5f };
    string[] detectableObjects = { "ground", "goal", "fairway" };

    // Raycast the environment at different angles and distances
    AddVectorObs(rayGridPer.Perceive(1f, rayAngles, detectableObjects, 0f, 7f));
    AddVectorObs(rayGridPer.Perceive(2f, rayAnglesOffset, detectableObjects, 0f, 7f));
    AddVectorObs(rayGridPer.Perceive(3f, rayAngles, detectableObjects, 0f, 7f));
    AddVectorObs(rayGridPer.Perceive(6f, rayAnglesOffset, detectableObjects, 0f, 7f));
    AddVectorObs(rayGridPer.Perceive(10f, rayAngles, detectableObjects, 0f, 7f));

    // Observe positional relations between the ball, the green and the area
    AddVectorObs(target.transform.position - goal.transform.position);
    AddVectorObs(goal.transform.position - area.transform.position);
    AddVectorObs(Vector3.Distance(target.transform.position, goal.transform.position));
    AddVectorObs(maxTargetDistance - targetDistance);

    AddVectorObs(shotCount);
    AddVectorObs(decisionCounter);

    // Observe whether the predicted trajectory hits an obstacle
    aimingAtObstacle = 0f;
    for (int i = 0; i < launchArcRenderer.lr.positionCount; i++)
    {
        float hitSomething = 0.0f;
        Collider[] hits = Physics.OverlapSphere(launchArcRenderer.lr.GetPosition(i), 0.3f);
        if (hits.Length > 0 && hits[0].CompareTag("obstacle"))
        {
            hitSomething = 1.0f;
            aimingAtObstacle = 1.0f;
        }
        AddVectorObs(hitSomething);
    }

    AddVectorObs(aimingAtObstacle);
}

What I learned

It's easy for the AI to come to the wrong conclusions about what the different observations actually mean. A lot of trial and error was needed to find a balance between the observations; it's easy to "overload" the AI.

Curriculum learning, where the AI goes through several lessons that get gradually harder, is very effective when the challenge is fairly hard: it reaches the end goal in much less time than training without it.