A Case Study in Opportunity Pursuit: Launching Quillionz

Vikas Joshi
September 25, 2018

This month has been special for us at Harbinger Group because we launched Quillionz, the world’s first AI-powered platform for creating questions, assessments, and quizzes. Quillionz can be a game changer for teachers, trainers, and eLearning developers, who typically put in long hours to frame good questions for testing. The cool thing about Quillionz is that it generates a host of question ideas in seconds, based simply on study materials or training content.

Throughout the pre-launch phase, it was fascinating for team Quillionz to experience the twists and turns so typical of all new product releases. It was particularly interesting to see how our Quillionz journey has been a classic practical illustration of the Opportunity Pursuit Framework. I have discussed this framework in my recent series titled ‘Five Key Techniques for Effective Pursuit of Opportunities’. Broadly speaking, the framework encompasses techniques such as connecting the dots, reframing, surfacing hunches, enactment, and effectuation.

In this post, I will illustrate how two of those techniques were at work in creating Quillionz.

How did the idea of Quillionz come about?

At Harbinger, one of our business lines is eLearning development. Here, our work puts us in touch with eLearning developers around the world. Harbinger also designs tools for eLearning developers. That means our customer support staff is in constant touch with a number of users.

Just by virtue of providing services and tools to the eLearning industry, we collect a lot of data on these professionals’ work patterns and their usage of our tools. When we started paying attention to where they were putting time and effort, we made a surprising discovery.

Framing questions is a big time sink for these professionals. A single good multiple-choice question may take up to 15 minutes to write, maybe more! And you need a good number of such questions to ensure learner engagement, which means a lot of time and effort for the course creator. That seemed to be a gap someone had to fill.

On another floor of the company building, we have Harbinger’s AI lab. This group of people works on natural language processing (NLP) applications. They write programs that can act as chat bots, recognize voice, classify documents, create summaries, and so forth.

So, finally someone connected the dots and asked: couldn’t we use NLP to generate questions from text? What might that look like?

Can you recognize the opportunity pursuit technique at work here? That’s right – Noticing the gaps and connecting the dots!

What did we do when we got stuck?

So, we spent several weeks cranking out Python code for analyzing textual content, identifying key themes in text, and generating questions of different types. Each question type had its unique challenges. For example, a multiple-choice question requires not only a correct answer, but also distractors—wrong answers that you deliberately display to challenge the learner. Designing those distractors wasn’t easy.
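To make the distractor challenge concrete, here is a deliberately simplified sketch of the multiple-choice step. This is not the actual Quillionz code—the function name and the keyword-based approach are illustrative assumptions—but it shows the basic shape of the problem: blank out the answer to form a stem, then pick plausible wrong answers from other terms in the source material.

```python
import random

def make_mcq(sentence, answer, distractor_pool, n_distractors=3, seed=0):
    """Toy multiple-choice question builder (illustrative, not the
    Quillionz implementation). The stem is the sentence with the
    answer blanked out; distractors are drawn from other key terms
    found in the source text."""
    stem = sentence.replace(answer, "_____")
    # A distractor must never equal the correct answer.
    candidates = [d for d in distractor_pool if d.lower() != answer.lower()]
    rng = random.Random(seed)
    distractors = rng.sample(candidates, k=min(n_distractors, len(candidates)))
    options = distractors + [answer]
    rng.shuffle(options)
    return {"stem": stem, "options": options, "answer": answer}

q = make_mcq(
    "Photosynthesis converts light energy into chemical energy.",
    answer="chemical energy",
    distractor_pool=["kinetic energy", "thermal energy",
                     "nuclear energy", "chemical energy"],
)
```

Even this toy version hints at the real difficulty: picking distractors that are wrong yet plausible requires genuine understanding of the content, which is exactly where naive automation starts to struggle.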

Other challenges presented themselves as we started generating other types of questions. When we took stock of how things were going, we realized that not every generated question matched the quality of questions a seasoned teacher would write. Every now and then, we encountered a question that wouldn’t quite work.

We were running in circles. When we stopped and thought about it, we realized we had been assuming that AI could do all the magic for us, and that fully automated question generation was therefore possible. We were getting frustrated because we were not succeeding at it.

Now was the time to use reframing. There was no point in charging ahead; we needed to change our frame of reference. Reframing requires that you identify the assumptions underlying your frame, question the validity of those assumptions, and consider a different frame.

So, we questioned the validity of our assumption, and it became abundantly clear that NLP technology had simply not yet reached a point where it could fully ‘understand’ language. Our core belief was questionable.

That pointed us to a different frame. Could we divide the work between AI and human designers in such a way that we gain significant efficiencies, yet still achieve output quality? That was the new frame we decided to adopt. We embraced a new design philosophy in which the product is AI-based, but always open to human input.

Accordingly, we rebuilt the whole user interface to encourage the teacher to participate at various junctures in question generation – right from identifying key themes in the content and tweaking it for clarity, to selecting appealing question ideas. If the user wants Quillionz to handle everything automatically, that option is always available. Conversely, when a user wants to influence where Quillionz focuses its questions, and how it phrases them, that is an option too.
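The human-in-the-loop idea described above can be sketched as a pipeline where each AI step exposes an optional review hook. This is a hypothetical illustration, not the Quillionz architecture—the function names, the naive theme extraction, and the callback design are all assumptions—but it shows how one code path can serve both the fully automatic mode and the human-guided mode.

```python
def _identity(x):
    return x

def generate_questions(text, review_themes=None, review_questions=None):
    """Toy human-in-the-loop pipeline (hypothetical, not the real
    Quillionz API). The AI proposes themes and question ideas, and
    optional callbacks let a human adjust each intermediate result.
    Passing no callbacks yields the fully automatic path."""
    review_themes = review_themes or _identity
    review_questions = review_questions or _identity

    # Stand-in for the AI steps: treat capitalized words as "themes"
    # and turn each theme into a naive question idea.
    themes = [w.strip(".,") for w in text.split() if w[:1].isupper()]
    themes = review_themes(themes)            # human may trim or reorder
    questions = [f"What is {t}?" for t in themes]
    return review_questions(questions)        # human may reject weak ideas

text = "Newton studied Gravity and Optics."
auto = generate_questions(text)               # fully automatic mode
curated = generate_questions(                 # human filters the output
    text,
    review_questions=lambda qs: [q for q in qs if "Newton" not in q],
)
```

The design choice mirrors the post’s point: the automatic path and the human-guided path are the same pipeline, so adding human input costs the user a callback, not a different tool.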

By noticing gaps and connecting the dots, we began our quest for intelligent question generation. When we encountered a block, we pivoted by reframing the problem. We restored the human in the loop and launched an AI-based tool open to human input.

That is how the Quillionz team used two of the five techniques in the Opportunity Pursuit Framework.