The AI hype is starting to find its limits. One reason for these limits is the lack of appropriate programming models to represent and express the concepts of learning algorithms. An example of this appears in Reinforcement Learning (RL) [1] programs, which often lack the standards and quality of regular software projects.
This problem arises, in part, from the poor tools available to express and represent programs built using RL techniques.
In this project we will implement an analysis framework to help us reason about RL programs in terms of their execution [2], enabling us to pause and modify program values and code at runtime, and to analyze program quality in terms of software metrics.
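As a rough illustration of the pause-and-modify idea, the following minimal Python sketch shows a toy Q-learning loop instrumented with an execution hook that periodically pauses to inspect and adjust live values of the learning algorithm. The names (ExecutionProbe, on_step) and the toy environment are illustrative assumptions, not the framework's actual API.

import random

class ExecutionProbe:
    """Pauses an RL loop at chosen steps and lets the user inspect/modify state."""
    def __init__(self, pause_every=1000):
        self.pause_every = pause_every

    def on_step(self, step, ctx):
        # ctx is a mutable dict holding live values of the learning algorithm
        if step % self.pause_every == 0:
            print(f"[probe] step={step} epsilon={ctx['epsilon']:.3f} "
                  f"q_size={len(ctx['q_table'])}")
            # Example of modifying a live value: decay exploration faster
            ctx['epsilon'] *= 0.9

def train(probe, episodes=50, steps_per_episode=100):
    # Toy 1-D chain environment: move left/right, reward at the right end
    q_table = {}  # (state, action) -> estimated value
    ctx = {'q_table': q_table, 'epsilon': 0.3, 'alpha': 0.1, 'gamma': 0.95}
    global_step = 0
    for _ in range(episodes):
        state = 0
        for _ in range(steps_per_episode):
            # epsilon-greedy action selection
            if random.random() < ctx['epsilon']:
                action = random.choice([-1, 1])
            else:
                action = max([-1, 1], key=lambda a: q_table.get((state, a), 0.0))
            next_state = max(0, min(10, state + action))
            reward = 1.0 if next_state == 10 else 0.0
            # Standard Q-learning update
            best_next = max(q_table.get((next_state, a), 0.0) for a in [-1, 1])
            old = q_table.get((state, action), 0.0)
            q_table[(state, action)] = old + ctx['alpha'] * (
                reward + ctx['gamma'] * best_next - old)
            state = next_state
            global_step += 1
            probe.on_step(global_step, ctx)  # framework hook: pause/inspect/modify
    return q_table

if __name__ == "__main__":
    train(ExecutionProbe(pause_every=1000))

In a full framework, the hook could additionally swap in modified code or compute software metrics over the running program; here it only demonstrates pausing and mutating live values during execution.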