Starting the Year With Comprehensible Input

We’re about nine weeks into my Year 8 beginner Japanese course, and I figured some of you might be curious how it’s been going with Comprehensible Input. What happens when you actually throw out vocab lists, drills, rote memorization and explicitly taught content and structures? Can you pick up words and structures from stories, visuals, language games and just plain old interactions?

Before we get started, if you’re curious about what I do, I think I’ve been pretty transparent up until now. You can find all my programs here, the activities I play here, and the stories I tell too. So with all that said and done, let’s take a quick look at how my students have been doing.


Week 4: Early Days, Modest Gains

Four weeks in, I started collecting vocabulary data - not through a formal quiz, but through various Gimkit games. In all honesty, I’ve always found my students do worst on these sorts of assessments, as they’re not exactly taught for them. Anyway, students were given 10 minutes for each quiz, once every two weeks.

At Week 4, we’d used well over 30 words and structures across different contexts. But part of the idea with this sort of approach in the classroom is to ‘shelter vocabulary’ and push grammatical structures (although the quizzes didn’t include everything I’d introduced). So in Week 4, students collectively answered 428 vocab-related questions, with 62.6% accuracy. Not stunning, but not terrible either - especially considering they were completely blindsided by this activity and had never once been asked to memorize any of these words. These words weren’t taught; they just showed up in stories, conversations, games, drawings, and daily class routines.

Week 4 was just a hint of things to come. This was only the number of questions students answered in a single 10-minute Gimkit game, but that number would keep growing, along with their accuracy. By the way, the massive jump in questions answered in Week 8 is likely down to the game we played, which was less of a game and more straight-up questions - specifically, the ‘Classic’/Tycoon Gimkit mode.
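(A quick aside for the data-minded: the weekly totals above are just tallies of per-question results. Here’s a minimal sketch of that kind of tally in Python, assuming a hypothetical CSV export with columns week, word and correct - the file name and column names are my stand-ins, not Gimkit’s actual report format.)

```python
# Minimal sketch: weekly question counts and accuracy from per-question data.
# Assumed (hypothetical) CSV columns: week, word, correct (1 = right, 0 = wrong).
import pandas as pd

df = pd.read_csv("gimkit_results.csv")

weekly = df.groupby("week").agg(
    questions=("correct", "size"),  # total questions answered that week
    accuracy=("correct", "mean"),   # proportion answered correctly
)
weekly["accuracy"] = (weekly["accuracy"] * 100).round(1)
print(weekly)  # e.g. Week 4: 428 questions at 62.6% accuracy
```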

Week 6: Natural Reinforcement Takes Hold

Two weeks later, I repeated the same kind of check, with around 48 words this time. I waited until the last possible moment in the week too - I think it was about 2:50pm, last period of the day - and everyone was more than happy to play a game of Gimkit. Accuracy rose from 62.6% to 70.5%, even though the total number of questions answered was slightly lower that week (around 340), probably due to lesson timing.

It should be made clear at this point, though, that students had never been told to “study” these words. They don’t write them down in their books, I don’t use flashcards, and we never go over them. They had just heard them - again and again - in different contexts, over time. I get bored very quickly in class too, so we don’t always do the same games and activities either. These aren’t all the words we used; in fact, I forgot to include (and still haven’t included) a few of the Super 7, even. Also, in case you’re curious, this class isn’t streamed - it’s mixed ability - and we live in a rural, lower socio-economic area. I typically see them twice a week too.

62.6% accuracy in Week 4.

70.5% accuracy in Week 6.

83.1% accuracy by Week 8!

77.2% accuracy in Week 9!

Week 8: It All Comes Together

By the end of Week 8, it was clear that something had shifted. Students answered over 1,200 vocabulary-related questions, with an overall accuracy of 83.1%. That’s nearly three times the question volume of Week 4, with a 20-point gain in accuracy. Some of the biggest improvements were in words that students struggled with early on. For example:

  • “sai” (age) started at 25% and rose to 88%

  • “ookii” (big) jumped from 31% to 89%

  • “suki” (like) climbed from 42% to 95%

I think these numbers show a broader shift in comprehension, a sign that students were truly acquiring these pieces of language, not memorizing for a test.

Week 9: Retention

By Week 9, the story takes an interesting turn. I added a batch of new vocabulary and the results were exactly what you’d expect in an acquisition-focused classroom: a small dip in accuracy, but continued high engagement.

In the game, students answered over 500 questions at an average accuracy of 77.2% - a slight drop from Week 8’s peak of 83.1%, but still miles ahead of where we started in Week 4 (62.6%). I don’t look at that little dip as a failure, though; I think it really shows that acquisition isn’t always linear.

By this point, students weren’t just recognizing the language from stories. Observationally, I can say they’re quite confident with a lot of words, and I no longer have to gesture or illustrate them. Some are confidently interacting with them, re-encountering old structures and building bridges between the familiar and the new, all without vocab lists, drills or memorization tasks. I fed all the Gimkit data into ChatGPT and asked it to generate a heatmap of how students were doing with each word across the weeks. Have a look for yourself:

I should point out that this isn’t anywhere near all the words and structures we covered by Week 9 - it’s just the words I added to the quiz at particular points.
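(And if you’d rather build that heatmap yourself instead of handing the data to ChatGPT, here’s a short sketch along the same lines, using the same assumed CSV as above - again, the file and column names are mine, not Gimkit’s.)

```python
# Sketch: per-word accuracy heatmap across weeks, from the same assumed
# CSV as above (columns: week, word, correct).
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("gimkit_results.csv")

# Rows = words, columns = weeks, cells = % of answers correct.
pivot = df.pivot_table(index="word", columns="week",
                       values="correct", aggfunc="mean") * 100

fig, ax = plt.subplots(figsize=(6, 8))
im = ax.imshow(pivot, cmap="RdYlGn", vmin=0, vmax=100, aspect="auto")
ax.set_xticks(range(len(pivot.columns)), labels=[f"Wk {w}" for w in pivot.columns])
ax.set_yticks(range(len(pivot.index)), labels=pivot.index)
fig.colorbar(im, ax=ax, label="% correct")
ax.set_title("Per-word accuracy by week")
fig.tight_layout()
fig.savefig("vocab_heatmap.png")
```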

What About the Students Themselves?

Looking at individual progress made this even more obvious.

Student A started at 43% in Week 4. By Week 8, they had reached 74% - and they answered nearly double the questions by then.

Student B improved from 52% to 73%, while also increasing their response volume from 33 to 70.

Student C was nearly invisible in Week 6, answering just 10 questions. But by Week 8, they responded to 118 prompts and got 95% of them right.

They didn’t have sets of words to study at home and they weren’t being taught explicitly. By the way, in case you’re wondering, I have 26 students in this class and this is just a sample.

Acquisition Looks Different

I think this might be a reasonably accurate picture of what language acquisition looks like in practice. Learning isn’t immediately obvious, and you can see that it goes up and down inexplicably. I can tell you that I still have students tell me they haven’t “learned” anything in my classes, because one of the fascinating things about acquisition is that it doesn’t feel like “learning” to the students. They often don’t realize they’ve picked something up because the knowledge isn’t conscious or rule-based. But when they answer 500 questions with 77% accuracy using words I never explicitly taught, that’s acquisition at work. It’s in their system, even if they couldn’t explain how it got there. I don’t think there’s any denying that acquisition is a thing - each of us does it throughout our lives.

Now… could I get these results using more traditional methods of teaching, like grammar translation and language ‘learning’ instead of ‘acquisition’? Absolutely. And I’m positive many would look at the number of words/structures the students covered and think you could do that in a quarter of the time (and I agree). But the real question is: what’s the difference between ‘acquired’ and ‘learned’ knowledge? There’s a good article here by Eric Richard on the difference between the two - if you’re curious, I’d have a read of it in full.

Learned knowledge is:

  • Conscious and explicit.

  • The student knows about the language: they can explain grammar rules, conjugate a verb table, or define a term.

  • But when speaking, they sometimes have to pause, think, and recall the rule before applying it.

Acquired knowledge is:

  • Subconscious and implicit.

  • The student can use the language without necessarily explaining how or why it works.

  • This knowledge comes out automatically, especially in conversation.

  • It’s like being able to ride a bike - your body just gets used to the movement.

Final Thoughts

I let language be experienced, not explained. An example of this: can you believe that we’re eight or nine weeks in, and I just had a student in one of my classes ask for the first time exactly what ‘wa’ does in a sentence? (I did explain it ages ago, but I think we’ve come full circle and that student is trying to make sense of it all now.) Up until now they’d just been using it and accepting it as a part of sentences, but in this moment I did a ‘pop-up’ explanation and told them explicitly how it functioned.

And just to be clear, I’m not against explicit teaching - I use it too, when it makes sense. There are times when I only teach explicitly, e.g. the introduction of Hiragana and Katakana (sometimes), grammar pop-ups, and some core grammatical concepts like Te-form and Plain Form. I’m a big believer that we should all have more than one tool in our belts. No single teaching method will ever be perfect in all situations. Every approach has its strengths, and I see part of our job as educators as staying informed and flexible. That said, with the current push toward explicit teaching in education, I think it’s only fair to show what the other side looks like - not just in theory, but in practice.

I actually have some other interesting evidence to share, but I’m still assembling it. I’ve been having my students do a ‘brain dump’ every month, so I can measure their development across the year. Students also write an English summary after each story we hear, but some have been attempting these in Japanese, so they would be interesting to look at too. Here is a photo I took of one student’s work that really got my attention - I found it interesting that they had begun using structures I was only starting to ‘hint’ at in class: quoting speech, adverbs, the particle ‘mo’, and incorporating verbs into sentences. They obviously still have a few things to ‘pick up’, but I think this is pretty damn great for about Week 6 (I think). And with enough exposure to correct grammar usage, like when to use ‘imasu’ and when to use ‘arimasu’, I think it’s safe to say they will self-correct.

This was a Beginning, Middle and End (BME) summary of the 3rd story from my book, The Story Pit.

Before I go: I’m not trying to redefine anyone’s method or change what you do. I’m just sharing what happened when I leaned into story-first, meaning-first input, and what the data showed me along the way (well, so far). And to be honest, a lot of my teaching choices come down to what makes my life easier. It’s interesting data though, don’t you think? Parent-teacher night is certainly going to be interesting this year.
