04 Jun 2013

What we learnt from BGV selection

By Lily

[Image: spider diagram used to assess interviewed teams]

We spent May going through many applications and running interviews, and now *fingers crossed* we have our ten teams! Sadly we can’t tell you who they are yet – but get ready to get excited!

So what did we learn?

  • Pre-screening is hard

Each application underwent at least six evaluations on F6S: the four of us on the BGV team plus two external judges. The scores were then averaged and the teams automatically ranked by overall score. We found this system made it hard to stay consistent, as the first evaluator’s score anchored everyone who scored after them. We wondered whether next year we might trial a ‘pick A or B’ system à la Zuckerberg’s Facemash, which would let evaluators compare teams directly against each other rather than against some theoretical set of criteria. Perhaps something to try out next round…
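For the curious, a ‘pick A or B’ system usually boils down to a pairwise rating scheme like Elo, which is what Facemash famously used. Here’s a minimal sketch in Python of how that might work; the team names, starting rating and K-factor are illustrative placeholders, not anything we’ve actually built:

```python
# Minimal Elo-style pairwise ranking sketch (illustrative only).

def expected(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(ratings: dict, winner: str, loser: str, k: float = 32) -> None:
    """Nudge both ratings after an evaluator picks `winner` over `loser`."""
    e_win = expected(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - e_win)
    ratings[loser] -= k * (1 - e_win)

# Every team starts level; each 'pick A or B' judgement shifts the ranking.
ratings = {team: 1000.0 for team in ["Team A", "Team B", "Team C"]}
update(ratings, winner="Team A", loser="Team B")
update(ratings, winner="Team C", loser="Team A")

for team, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {rating:.0f}")
```

The nice property is that each judgement is a direct head-to-head choice, so there’s no first-score anchor for later evaluators to drift towards.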

  • Have interviews not INTERVIEWS

We tried not to think of our interviews as interviews, but rather as a chance to work out which teams we’d like to work with and who would get the most out of the programme. We wanted people to come away from interviews feeling inspired and having learned something, because we were selling them the programme as much as they were selling us their idea and team. Our aim was to be as helpful as possible, and our policy was complete openness.

Above all, we looked for teams who were ready to commit, open to criticism and making genuine practical progress; teams who knew who was doing what, but who were also comfortable sharing both credit and blame among themselves.

  • Make pictures not numbers

After each session we filled out spider diagrams, like the one above, to assess each team. Each arm represented one of our interview criteria, and rather than reducing the average to a single number, the diagram showed it as a filled area, which let us spot strengths and weaknesses immediately and compare teams at a glance.
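If you fancy drawing something similar yourself, here’s a rough sketch using Python and matplotlib. The five arms are the criteria we describe above, but the scores, the five-point scale and the styling are made-up placeholders:

```python
# Rough sketch of a spider/radar diagram for interview scores (illustrative).
import numpy as np
import matplotlib.pyplot as plt

criteria = ["Commitment", "Openness", "Progress", "Clear roles", "Shared credit"]
scores = [4, 3, 5, 2, 4]  # averaged interview scores out of 5 (placeholder data)

# One angle per criterion; repeat the first point to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)  # the filled area makes weak spots jump out
ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria)
ax.set_ylim(0, 5)
plt.show()
```

Plotting one polygon per team on the same axes makes side-by-side comparison just as easy as spotting a single team’s weak arm.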