
Greater London, England, United Kingdom
6K followers 500+ connections



Articles by Jan

  • How to make a success story of your data science team.

    Data science resounds throughout every industry and has reached the mainstream media. I no longer have to explain what…

  • Organising the Zoopla Hackathon

    Zoopla just ran a successful two-day hackathon, and it all started with 3 people and a dream. Wouldn’t it be great if we…

  • How to build a Recommendation Engine quick and simple

    Part 1: an introduction, how to get to production in a week, and where to go after that. This article is meant to be a…

  • Location Location Location

    How to create geographic area embeddings using Machine Learning and a little black magic wizardry. The Zoopla.

  • Rendezvous Architecture for Data Science in Production

    Part 1: The real challenge in data science. It is impossible to miss how the data field has gained some new buzzwords. It…



Experience & Education

  • City University London



Publications

  • A Reward-driven Model of Darwinian Fitness

    In Proceedings of the 7th International Joint Conference on Computational Intelligence

    In this paper we present a model that, based on the principle of total energy balance (similar to energy conservation in Physics), bridges the gap between Darwinian fitness theories and reward-driven theories of behaviour. Results show that it is possible to accommodate the reward maximization principle underlying modern approaches in behavioural reinforcement learning and traditional fitness approaches. Our framework, presented within a prey-predator model, may have important consequences in the study of behaviour.

    Other authors
    • Mark Broom
  • Models of aposematism and the role of aversive learning

    City University London

    The thesis will identify open questions of interest around aposematism. In the second chapter the thesis will focus on the perspective of the prey. The introduction of a game theoretical model of co-evolution of defence and signal will be followed by an adaptation of the model for finite populations. In finite populations, investigating the co-evolution of defence and signalling requires an understanding of natural selection as well as an assessment of the effects of drift as an additional force acting on stability. In the third chapter the thesis will adopt the perspective of the predator. It will introduce reinforcement learning as a normative framework of rational decision making in a changing environment. An analysis of the consequences of aposematism in combination with aversive learning on the predator’s diet and energy intake will be followed by a lifetime model of optimal foraging behaviour in the presence of aposematic prey in the fourth chapter. In the last chapter I will conclude that the predator’s aversive learning process plays a crucial role in the form and stability of aposematism. The introduction of temporal difference learning allows for a better understanding of the specific details of the predator’s role in aposematism and presents a way to take the discipline forward.

  • The Evolutionary Dynamics of Aposematism: a Numerical Analysis of Co-Evolution in Finite Populations

    Cambridge University Press

    The majority of species are under predatory risk in their natural habitat and targeted by predators as part of the food web. During the evolution of ecosystems, manifold mechanisms have emerged to avoid predation. So-called secondary defences, which are used after a predator has initiated prey-catching behaviour, commonly involve the expression of toxins or deterrent substances which are not observable by the predator. Hence, the possession of such secondary defence in many prey species comes with a specific signal of that defence (aposematism). This paper builds on the ideas of existing models of such signalling behaviour, using a model of co-evolution and generalisation of aversive information, and introduces a new methodology of numerical analysis for finite populations. This new methodology significantly improves the accessibility of previous models.

    Other authors
    • Mark Broom
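    A standard building block for this kind of numerical analysis of finite populations is the fixation probability of a mutant strategy under a Moran process. The sketch below is purely illustrative and is not the paper's model; the constant-fitness assumption and all values are mine:

    ```python
    def fixation_probability(n, r):
        """Probability that a single mutant with constant relative fitness r
        eventually takes over a Moran-process population of size n.
        Closed form: (1 - 1/r) / (1 - 1/r**n) for r != 1, else the neutral 1/n."""
        if r == 1:
            return 1.0 / n  # neutral drift: each individual equally likely to fix
        return (1 - 1 / r) / (1 - 1 / r ** n)

    # Advantageous mutants fix more often than neutral ones; disadvantaged
    # mutants can still fix in a finite population, just rarely (drift).
    rho_neutral = fixation_probability(10, 1.0)
    rho_fit = fixation_probability(10, 2.0)
    rho_unfit = fixation_probability(10, 0.5)
    ```

    The point of such formulas in a finite-population setting is exactly the one the abstract raises: selection and drift act together, so even a disadvantaged variant has a nonzero chance of fixing.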
  • The Application of Temporal Difference Learning in Optimal Diet Models

    Journal of Theoretical Biology

    An experience-based aversive learning model of foraging behaviour in uncertain environments is presented. We use Q-learning as a model-free implementation of Temporal Difference learning motivated by growing evidence for neural correlates in natural reinforcement settings. The predator has the choice of including an aposematic prey in its diet or foraging on alternative food sources. We show how the predator's foraging behaviour and energy intake depend on the toxicity of the defended prey and the presence of Batesian mimics. We introduce the precondition of exploration of the action space for successful aversion formation and show how it predicts foraging behaviour in the presence of conflicting rewards which is conditionally suboptimal in a fixed environment but allows better adaptation in changing environments.

    Other authors
    • Mark Broom
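    The core mechanism the abstract describes — a Q-learning predator forming an aversion to defended prey, with exploration as a precondition — can be sketched in a few lines. This is a toy two-action bandit version, not the paper's model; the reward values, learning rate, and exploration rate are all illustrative assumptions:

    ```python
    import random

    def simulate_forager(toxicity, episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
        """Q-learning forager. Action 0 = attack the aposematic prey,
        action 1 = forage on an alternative food source.
        Illustrative rewards: prey yields energy 1.0 minus a toxicity cost,
        the alternative yields a fixed 0.3."""
        rng = random.Random(seed)
        q = [0.0, 0.0]  # one-state problem, so Q reduces to two action values
        for _ in range(episodes):
            # epsilon-greedy exploration: without it, an aversion (or its
            # revision in a changed environment) cannot form
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[0] >= q[1] else 1
            r = (1.0 - toxicity) if a == 0 else 0.3
            q[a] += alpha * (r - q[a])  # TD(0) update for a one-step task
        return q

    q_mild = simulate_forager(toxicity=0.2)   # mildly defended prey stays in the diet
    q_toxic = simulate_forager(toxicity=1.5)  # strongly defended prey: aversion forms
    ```

    With a mild toxin the prey's net reward (0.8) beats the alternative (0.3), so the learned value keeps it in the diet; with a strong toxin the net reward is negative and the forager learns to avoid it.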

