
Super Challenge FAQ

You asked. We answered.

We’ve been running the YeahWrite Super Challenge for about three years now (someone remind us to update this sentence) and over that time we’ve heard your questions and concerns. We can’t answer all of them, but before you hit send on that email, check this page. If you still have questions about the Super Challenge, drop us a line.

And remember: this FAQ isn’t a substitute for reading the rules for the fiction, nonfiction, or microprose Super Challenge you’re entering!

Why is the Super Challenge on the same weekend as (other competition)?
Honestly? It’s a matter of math. Our Super Challenge runs four times a year, for six weeks at a time. That’s 24 weekends, or roughly half the possible weekends in the year. Cut out major holidays in several religions, weekends with events like back-to-school and spring break (so that parents can compete), and national holidays in the US (so that people with family obligations can compete), and we’re pretty locked into our possible dates. It would be a miracle if we never overlapped with any other challenges, especially since most other competitions also try to work around the holidays and major events. We do our best not to overlap with other competitions, especially if we know their schedules in advance, but sometimes we just can’t put an extra weekend in a month.
Would you really disqualify me because I wrote my story in Calibri 11 point?
Yes. Yes, we would, and it would make us extremely sad, especially if the story or essay was otherwise fantastic.

There are only two reasons to have a rule in a writing competition: to define the challenge (like “you must include this prompt in this way”) and to prevent cheating (like “don’t contact our judges about the competition”). Your anonymity and right to be read fairly are important to us. That’s why we require everyone to submit their work in the same format, with the same information, laid out the same way, in the same font, the same size.

One way we try to offset how rough it is to miss advancing because of a misread rule or missed prompt is to ask our judges, time permitting, to give feedback on disqualified works. That way a disqualified writer can at least reap most of the benefit of having submitted, and hopefully end up with a piece to polish for publication elsewhere.

Now I'm nervous. What is my title page supposed to look like?

Your title page should contain three (maybe four, if you add a content warning) pieces of information, and nothing else. No header, no footer, no border, no random fonts or different font sizes or colors. You can center your title, group number, and prompt or summary, if you like; it looks pretty but it’s not necessary. The specific requirements for each round’s title page will be spelled out clearly in the assignment email. Use a hard page break before continuing with your submission; not everyone’s computer paginates things the same way, and you don’t want your formatting to get screwed up.

Yeah, we know it seems a little silly to have an entire FAQ entry about the title page, but probably two thirds of the title pages we see are nonconforming. While judges want to be as impartial as possible, having the first thing they see suggest that the writer has disregarded the rules sets a negative tone for the interaction, besides costing the writer some easy formatting points.

The first two examples below are acceptable title pages. The third is not.

 

How is the competition scored?

We score on three factors:

  • The Rules – are word count, formatting, and title page correct, are prompts included, etc.
  • Technical Quality – grammar, sentence structure, spelling, punctuation, etc.
  • Substantive Quality – how well are prompts incorporated, narrative structure, voice, etc.

Each of these factors has a number of subfactors, and the overall scoresheet is weighted slightly in favor of substantive rather than purely technical skill.

Each group is scored by at least two judges. Judges score an entire group, not individual submissions. Judges receive rough guidelines about the nature and quality of a story that would receive a total score within a certain range; for example, the top 5% of possible scores is described as “ready for publication with no further edits and likely to be nominated for an award in the appropriate genre.” This encourages normative scoring, although judges still have individual preferences. Having multiple judges helps to normalize scoring in the face of a strong judicial preference.

We do not have judges rank submissions in order of preference and assign points that way. We’ve found that method is less objective, and it flattens out the advantage or disadvantage that an exceptionally good or poor work should carry when judges’ scores are combined. That is, if one submission is much better than another, we believe there should be more than a one-point difference between the two.
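If it helps to see the arithmetic, here’s a minimal sketch of how a weighted, multi-judge scoresheet like the one described above can be combined. The factor names come from this FAQ, but the weights and point values are invented for illustration; they are not the actual Super Challenge scoresheet.

```python
# Illustrative only: factor names from this FAQ, but the weights and point
# values are hypothetical, not the real Super Challenge scoresheet.

# Hypothetical weights, tilted slightly toward substantive quality.
WEIGHTS = {"rules": 0.30, "technical": 0.30, "substantive": 0.40}

def weighted_total(scores):
    """Combine one judge's 0-100 factor scores into a single weighted score."""
    return sum(scores[factor] * weight for factor, weight in WEIGHTS.items())

# Two judges score the same submission on the three factors.
judge_a = {"rules": 100, "technical": 80, "substantive": 70}
judge_b = {"rules": 100, "technical": 90, "substantive": 85}

# Averaging the judges' weighted totals smooths out individual preferences.
totals = [weighted_total(judge) for judge in (judge_a, judge_b)]
print(sum(totals) / len(totals))  # 86.5 on this made-up scale
```

A points-then-average approach like this keeps the gap between a much stronger and a much weaker submission proportional, which is exactly what a one-point rank difference would erase.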

Each round is scored individually and scores do not carry forward between rounds.

 

Why don't you list scores when you release results?
Bear with us here for a second, because we promise it will make sense by the end of this section: the score doesn’t contain any useful information.

We reviewed a bunch of scoring models that other competitions use and a few that they don’t (with good reason, it turned out), and in the end we settled on just releasing the list of who is advancing to the next round because it gives writers all the information they need for the competition without inviting them to compare works in ways that don’t make sense for a competition scored in separate groups with different judges. If you want the deep dive into why, keep reading this incredibly long (it’s so long, we’re sorry) section.

Let’s do the math. Pretend that scores go from 0-100, because those are easy numbers, and that each group contains ten writers. A submission got a score of 50. What does that reveal about the submission? Nothing. It doesn’t even indicate whether the writer is advancing to the next round. That 50 might be the best score in the group or it might be the worst – there’s no way to tell.

Okay. Now let’s assume that we reveal *everyone’s* scores. That 50 is the lowest score. As writers and human beings, we don’t think it’s useful (or, frankly, kind or professional) for a competition to tell everyone who was the worst writer in the group. That’s why we don’t share it. What’s more, even knowing a submission received the lowest score in its group still doesn’t tell you if it was objectively good or bad. All it says is that there were better writers in the group. The worst work in a group might still be publishable as-is. The same principle applies if that 50 was the top score.

Add another level of difficulty to the equation: every judge scores differently. We give judges suggested score ranges for quality control purposes, but one person’s “very good, I really like this” might be a 70, and another’s might be a 95. That’s why we advance the top scorers out of each group rather than the top overall scorers: so that no one gets an unfair advantage or disadvantage from having a particularly high- or low-scoring judge. When we were deciding how to score the competition, we did some trial runs with different systems. It turned out that stories would place in the same order across various judging groups, but would have average scores that varied by as much as 20 points (that was on an old 150-point scoring system we used for a free summer challenge a few years ago). So a second-place work might have a score of 40 in one judging group, or 60 or even 80 in another, but it would still be in second place no matter who was judging. On the other hand, comparing work across groups, one group’s 60 might be comparable to another group’s 80, but there is no way of knowing that. So a lower-quality submission in one group may receive a higher absolute score than a great work in another group – even though it didn’t have a higher relative score once you adjust for the curve.
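To make the group-curve point concrete, here’s a minimal sketch with entirely made-up scores. It shows why we advance the top scorers within each group: two panels can hand out very different raw numbers while agreeing on the order, so comparing raw scores across groups would reward whoever drew the more generous judges.

```python
# Illustrative only: invented scores for two groups judged by different panels.
# Group B's judges simply score lower across the board; order is what matters.
groups = {
    "A": {"writer_1": 88, "writer_2": 83, "writer_3": 75, "writer_4": 70},
    "B": {"writer_5": 68, "writer_6": 63, "writer_7": 55, "writer_8": 40},
}

ADVANCE_PER_GROUP = 2  # hypothetical: top two of each group move on

# Per-group advancement: relative standing inside each group decides.
for name, scores in groups.items():
    top = sorted(scores, key=scores.get, reverse=True)[:ADVANCE_PER_GROUP]
    print(f"Group {name} advances: {top}")
# Group A advances: ['writer_1', 'writer_2']
# Group B advances: ['writer_5', 'writer_6']

# Taking the top four raw scores overall instead would advance group A only,
# purely because its judges scored higher on an absolute scale.
overall = sorted(
    ((writer, score) for group in groups.values() for writer, score in group.items()),
    key=lambda pair: pair[1],
    reverse=True,
)[:4]
print("Top raw scores overall:", [writer for writer, _ in overall])
# Top raw scores overall: ['writer_1', 'writer_2', 'writer_3', 'writer_4']
```

On this made-up scale, group B would be shut out entirely under a raw-score cutoff, even though its best writer might be stronger than group A’s third-place writer; that’s exactly the comparison we don’t want published numbers to invite.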

Revealing the rank of each submission within a group runs into the same challenges. The top ranked writer within one group might not have advanced at all if they had been assigned to another group. The bottom ranked writer would probably prefer that we not advertise that, and again, the bottom-ranked writer in one group may be better than a much higher-ranked writer in another group.

TL;DR: we don’t tell you your score – or your rank – because there’s no actual information in it that will help anyone become a better writer (that’s what feedback is for), or even reveal whether a submission is high or low quality. The most you could do with any of that information is compare submissions within a specific group, which still doesn’t reveal a submission’s overall quality or even its quality rank within the whole competition field.

Why don't I know who my judges are?
Judging at the Super Challenge is anonymous on both ends: judges have no direct access to submissions, and receive pre-anonymized packets from our admin team. They are forbidden to discuss the work until they have turned in their scores, unless they have a question for us about whether a work conforms to the rules. The only times a work is discussed are when a judge cannot locate a mandatory prompt (in which case we’ll poll the other judges in that group to see if they located it, trying to err on the side of inclusion) or when a judge thinks they recognize a writer, either through a detail included in an essay or through writing style. Because we do run a small community, our admin team does its best to separate writers from judges who would be that familiar with their work or personal life, and we have procedures in place to remove and replace a judge who recognizes a writer so that the writer can be judged impartially. (It doesn’t usually work the way people think; judges tend to score writers they know and like more conservatively, as a way of overcompensating. A judge knowing a writer is more likely to lower the score than raise it.)

Another reason we separate and anonymize our judges is that, well, they’re giving negative feedback. Because we told them to. And negative feedback hurts, and it’s tempting for people to lash out when hurt. We’ve gotten some incredibly unprofessional emails from writers who were not prepared to receive negative feedback. We’re not willing to subject someone who has just spent an average of 5-7 hours judging a group to attacks just because they gave the feedback we asked for. Writers should always feel free to send us substantive negative feedback about judging at superchallenge@yeahwrite.me, and if our admin team finds merit in it beyond a personal attack on the judge, we do pass that feedback along. If it’s useful in a general rather than specific sense, we incorporate it into our overall judging guidelines. Except the comment that said “you are horrible and I hate you.” We didn’t put that one in, but we hope whoever it was is having a better day today than they were when they wrote it.

Why do I get my feedback before my results?
While the Super Challenge is a writing competition, YeahWrite as a community is dedicated not just to “winning” but to making better writers. If writers receive feedback after results, it’s too easy to dismiss the negatives if they advanced or the positives if they didn’t. We want everyone to have a chance to sit with the feedback not just for one round of one competition but to consider how it might affect their writing overall. For example, persistent grammatical errors will hurt a writer’s results eventually, even if they advanced in a particular round.

BUT. Don’t overthink it. No single piece of positive or negative feedback is the reason you did or did not advance.

Let’s look at the statistics. There are a lot of columns on the judging sheet. Even if a writer loses literally all the points in one column from one judge, it’s a minor statistical bump in their overall result for that round. That’s why we judge the way we do, on many factors and with several judges: to make sure no one gets knocked out of a round just because they ran up against a judge’s pet-peeve grammar error, or struggled with a specific prompt style.
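To put rough numbers on that claim, here’s a minimal back-of-the-envelope sketch. The column count, points per column, and number of judges are invented for illustration; the real scoresheet’s figures differ, but the proportions work out similarly.

```python
# Illustrative only: hypothetical column counts and point values,
# not the real Super Challenge scoresheet.
columns_per_judge = 12   # hypothetical number of scoring columns
points_per_column = 10   # hypothetical maximum points per column
judges = 2

max_total = columns_per_judge * points_per_column * judges  # 240 points available
one_column_lost = points_per_column                         # worst case: one column, one judge

share = one_column_lost / max_total
print(f"Losing every point in one column from one judge costs {share:.1%} of the total")
# -> about 4.2% of the available points
```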

Why am I getting negative feedback? I had lots of proofreaders.
OK. Take a breath. We’re writers, too, and we know how getting negative feedback feels. That’s why we advise everyone to read their negative feedback with these things in mind:

  • Each judge is required to give positive and negative feedback. So negative feedback doesn’t mean a writer is not advancing or even that they lost a ton of points over that grammatical error. It just means that we asked the judges to locate and describe something that could have been improved in every piece of writing.
  • Your proofreaders may not have been focused on the minutiae of your grammar; if you’ve got a continuity error, they’re more likely to point that out than fiddle with copyediting a verb tense that won’t even be there later.
  • Are you asking your betas/proofreaders the right questions? Sometimes taking a story or essay to an independent reader and asking “is this issue present in my work” can be more useful than just saying “is this a good story.”
  • Some issues aren’t caught at the beta/editing level because the reader believed there was something else that the writer needed to fix as a priority. Betas know that you only have 48 hours to finish a submission and that they can’t catch everything, so they often point out gaping plot holes, missed prompts, weird blocking in fight scenes, confusing dialogue, and other big issues before they focus on minutiae.
  • Your beta readers might not be professional editors. Remember, pro writers are not always pro editors. That’s why editors exist. We draw our judges from a pool of editorial and educational professionals to make sure you get high quality feedback on your work.
  • Finally, no single piece of negative feedback is the reason a work did or did not make the cut in any particular round. That feedback describes just one of the many factors that went into the score.
Why did the judge focus on [issue] in my work? It doesn't seem like a big deal.
When reading feedback – positive or negative – it’s important to keep a couple things in mind:

  • No single outstanding moment or typographical error will make or break a piece of writing, no matter how big – unless it’s a missing mandatory prompt or a similar rules violation.
  • Each judge has different priorities as a person. That means that if a piece of writing has five things going on and they’ve got about three sentences to describe it, they’re probably going to pick the one that’s most important to them or easiest to describe quickly. It doesn’t mean that’s the only thing they see, or that it’s a thing the work necessarily scored particularly high or low in.
  • Need an example? If there are 5-10 typos in a story, “this story could have used a round of spellcheck” is both easy feedback to give and something the writer should be told, even though it’s probably not a dealbreaker for the story overall. On the other hand, the story may also need a more significant developmental and organizational edit before it would be publication-ready, but there’s no way to deliver enough of that feedback meaningfully in three sentences. Judges tend to focus on the feedback that they can give meaningfully, and on the issues that they feel most competent to identify and help fix.
I can't find the thing a judge said was in my work!
Whether it’s comma splices or an incomplete description, we’re rarely the best placed to see issues in our own work. After all, we already know what we meant to say or describe, or what the outcome of a conversation was.

Sometimes the best way to find an error in writing is to ask someone who has no investment in the writer as a person, and to ask specific questions about the work. Keep the questions neutral, and try not to prime the reader with information that isn’t in the story. Instead of asking “I described X, why did the judge say Y,” try asking “when you read my description of the room, what do you ‘see’ in your mind? Where is the furniture?” Open-ended questions can help pinpoint where the issues are. You can also use this tactic with your beta readers, if you know you have a persistent issue: “Can you keep an eye out for word overuse, not just plot holes?”

Sometimes the right answer is that the judge did miss a word or two of a description, or preferred to shorten a sentence that works just as well long. Part of the reason we judge on a broad range of factors is so that no single human error will have a major impact on the overall result for a submission. No judge is out to get anyone: they don’t even know whose work they’re judging. In fact, our anonymization team doesn’t even release the names of writers internally at YeahWrite until after the first round of competition. We do try to select judges who are familiar and comfortable with the genre or style you’re being asked to write in, so that everyone goes into each round with the maximum chance of drawing a judge who is enthusiastic about reading their work.

Two of my judges said different things!
When we give submissions to our judges, we send a packet that includes their scoring information and a general framework of what we’re looking for in each category and overall. For fiction, that includes the entire prompt received and some guidelines for what we think is out of bounds or would result in disqualification, as well as what we are looking for from the prompt or writer – usually an expanded version of the tips and tricks we give the writers about that prompt.

Beyond that, other than in the case of a possible disqualification, we don’t require our judges to have the same opinion on any submission. Some judges think that shorter sentences make the point better; others would prefer that you do a better job with the punctuation in your long, complex sentences. Some judges may read a description as implying one thing; others, another. (Before jumping in with “that’s not in there,” ask: “Can I see where the judge saw that? Is it a legitimate misinterpretation of a place where I wasn’t clear?”)

Read apparently conflicting feedback with a grain of salt: is it possible that both things can be true? Sometimes one judge sees that 90% of a story’s sentences are short and choppy and suggests the writer try a more lyrical phrasing, but another focuses on one overwhelmingly long sentence and asks the writer to break up excessively long phrases.

Finally, each judge was asked to give both positive and negative feedback. In order to do that, some of them may have focused on a single incident or idea because that’s what stuck out for them in a positive or negative light. That doesn’t mean that the submission scored no points in a relevant category, or that it scored exceptionally well with one judge and not with the other. In most cases, the information in feedback is either a general overall sense of what the judge liked or disliked about the work, or a very specific instance of an error or good technique. Neither of those types of feedback translates particularly well into (or strongly affects) a numerical score, which is another reason we like to give feedback instead of just a score.

I feel hemmed in by having to use the prompts

The challenge of a prompted, timed flash writing competition is to write a story, within a finite amount of time and a fixed word count, that includes certain elements. That means that in order to declare a winner we have to be able to score on all of those things: did the writer turn in a work on time that matches those rules? Shameless plug: if you want to practice for free, our weekly prompted fiction and poetry challenge is open every Monday through Wednesday. All you need is an internet connection and a place like a blog or website to host your work with a permanent URL.

If what a writer wants to do is enter a prompted, timed, flash writing competition, we think this is a great one. If what they want to do is write a story they love and enter it somewhere, they’re probably not going to be happy with the constraints of a prompted competition, and the resulting work may not score well, regardless of its inherent merit or publishability.

We want you to be happy. If what you want is freedom, we’d love to see you on our free weekly challenge grids – the nonfiction challenge offers an optional prompt, but really, you can write about whatever you like. We’d be thrilled on your behalf if you took that piece that loosely incorporates a vague reference to something the prompt made you think of, submitted it to a magazine, got published, and made a gajillion dollars. OK, thrilled and jealous, but you know what we mean. Heck, if what you really want is solid feedback on a piece that you wrote, maybe inspired by one of our prompts and maybe not, our membership-benefit editorial reviews are probably, dollar for dollar, a better deal for you than entering a prompted writing competition where some of the feedback is bound to be “this writer struggled to incorporate the prompt” or “the prompt in this piece didn’t really advance the plot.” We’ll happily acknowledge that’s not useful to you if what you want is a submittable piece for a magazine or anthology, and we’d love to work with you to find a way to get you the feedback that will make that piece the best it can be.

TL;DR: the prompt isn’t there just to make sure a writer doesn’t submit something they wrote a year ago. It’s there because how well they incorporate it is part of the challenge. We hope you have as much fun using the prompt as we do coming up with it.

 

The judges said I went over word count, but it's under on my computer
Unfortunately, there’s no Holy Grail of word count. Different programs count words with hyphens, ellipses, and other punctuation differently. That means you might want to run your submission through a few different programs and use the count that’s least favorable to you, or give yourself more of a margin of error than one word. We don’t disqualify a submission for going just a couple of words over (we’ve all suffered at the hands of word counting too), but there is an overage penalty, so play it safe.
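If you’ve never watched two tools disagree about the same sentence, here’s a minimal sketch of why it happens. Both counting methods below are simplified stand-ins; real word processors each apply their own rules, which is exactly why the counts drift.

```python
# Illustrative only: two simplified ways of counting "words" disagree on the
# same sentence. Real word processors each have their own rules.
import re

text = "Well... the half-finished draft sat there, untouched, for a week."

# Method 1: split on whitespace, so "half-finished" counts as one word.
whitespace_count = len(text.split())

# Method 2: count runs of letters/digits/apostrophes, so "half-finished" counts as two.
token_count = len(re.findall(r"[A-Za-z0-9']+", text))

print(whitespace_count, token_count)  # 10 vs. 11 for the same sentence
```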
Why don't you use Submittable (or a comparable service)?
The short answer is: because we’d have to charge an extra $15-20 per entry to pay for it, and we want to keep our fees low.

Entry and submission fees are a huge challenge for many writers, and we want to remain accessible to as many people as possible. Trust us: we would love to use Submittable or Moksha. On your end, it’s neat to be able to track your work. On ours, it’s neat to have an automatic confirmation, format check, and anonymization that doesn’t require volunteer staff time. But raising our entry fee by 50% would mean that a lot of writers, especially in our international community (thanks, exchange rates and fees!), would be unable to participate. We had to make a choice, and we picked “a little inconvenient and a lot more available.”
