
Is my project paper-ready?

There is very rarely a straight path to a paper. Instead, the path to a successful project is riddled with roadblocks, twists, and turns, all of which shape the final form of your project.

Ever wish you were fully ready for a paper deadline? Nice in theory, but let's be honest: that only happens once the deadline has passed. You'll often need to assess paper-readiness with very few experimental results, if any.

This is a guide to assessing how paper-ready your project currently is. It's a daunting task, but a few guidelines can make your gut feeling more concrete.

How to find the "perfect" idea. (Don't.)

Papers are rarely a straight and clear shot: The first idea will fail, the first experiments will be inconclusive, and the first talk will be a mess. These aren't rookie mistakes; they're part of iterating on your idea.

Unfortunately, at these stages, many papers are killed out of boredom with an idea. Many others are "scooped," and yet others are abandoned for no particular reason. Sad as it is, this is the norm. If any of these scenarios is yours, realize that it's normal.

The exceptions, and the papers you strive for, are those borne out of a unique twist. These twists can result from reading an insightful paper, encountering highly-similar recent work, or a random spark of inspiration. High-profile papers like BERT, Stable Diffusion, and ViT look like great ideas spun out of a vacuum, but this simply isn't true.

Said in a more cliché way, exceptions are borne out of necessity, and the high-impact papers you see are the result of many coincidental forces — not the author's intent from day 0. Knowing this, the most critical part is knowing how to refine ideas, not how to find perfect ideas at the outset.

Do you have a challenge and hypothesis?

With that said, your project's core idea needs to follow a few basic principles to be able to evolve. There's no need to stay married to a project, but you should be deliberate in defining the following and, if need be, redefining them later:

  1. Solves a technical challenge. Know the specific problem you're trying to tackle. There are two simple ways to identify challenges:

    1. Challenges that previous papers identify as worthwhile to tackle. Especially with accepted papers, these are problems that others have agreed are important to solve.
    2. Identify problems with previous approaches. These may even be problems that a large body of work misses. The broader the convention you defy, the more impactful this problem statement becomes.
  2. Based on a hypothesis you believe in. This hypothesis should be an insight that previous works miss. The critical part is defining the hypothesis correctly, not whether it turns out right or wrong, as either outcome can be insightful. If you believe in the hypothesis strongly, then either:

    1. Your intuition is wrong, and you learn something new. This itself could be an insight to capitalize on. This is important: it rules out directions that aren't worth exploring.
    2. Your intuition is right, and the project proceeds as normal. You have many options at this point to explore further. Given this is the rarer of the two, I'll skim over this case.

Here are two papers that exemplify the above. A paper that effectively identifies a broken convention is Towards Streaming Perception, where the authors realized that previous works were evaluating real-time models on videos incorrectly.

To date, this paper has influenced evaluation metrics at many companies deploying real-time video models. A second exemplary paper is Sanity Checks for Saliency Maps, which instead capitalizes on a negative result. This paper shows that previous saliency maps fail basic sanity checks.

A large number of papers now include these sanity checks in their experiments, a largely academic success. For more examples of challenges and hypotheses, as well as the research "story" they culminate in, see What defines a "good" researcher?

How to evolve your project idea

The two most common obstacles for your project idea map loosely onto the above criteria. In fact, almost any problem with a project idea boils down to one of these two criteria failing:

  1. Challenge is unclear or unstated. Try explaining your project's problem to someone else. If you can't convey the difficulty and importance of the problem, the problem is likely ill-defined or not defined at all.
  2. Hypothesis is not one you believe in. Or, more commonly, the hypothesis is not fully distilled; instead, it's a surface-level bundle of several core ideas.

    1. For example, say you believe your architecture is going to improve accuracy. This isn't a hypothesis. Why should your architecture achieve higher accuracy?
    2. Maybe previous methods only operate on inputs from left-to-right, and your method operates on inputs in both directions, left-to-right and right-to-left. This is still not an insight. Why does this difference matter for accuracy?
    3. The insight is that text has bidirectional relationships, so the model should capture bidirectional relationships too. Now there's reason to believe this insight can result in higher performance. This was the motivation for BERT (see the sketch after this list).
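
To make this concrete, here's a toy illustration of the difference, written as attention masks in Python. This is my own sketch, not code from the BERT paper: a left-to-right model masks out future tokens, while a bidirectional model lets every token attend to every other token.

import torch

seq_len = 5

# Left-to-right: token i can only attend to tokens 0..i (lower triangle).
causal_mask = torch.tril(torch.ones(seq_len, seq_len))

# Bidirectional: token i attends to all tokens, both left and right.
bidirectional_mask = torch.ones(seq_len, seq_len)

print(causal_mask)         # strictly left-to-right context
print(bidirectional_mask)  # full bidirectional context, as in BERT-style models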

These two downfalls are key. Over time, your project may evolve, but keep your challenge and hypothesis defined regardless, so you can deliberately evolve your project as need be. Here are factors beyond your control that may force you to redefine them:

  1. You find related work "scooping" your idea: This happens all the time for fast-moving and hot topics. Got scooped by another paper? Start by asking what's different about your method and theirs; critically, identify differences that matter for accuracy or efficiency. Changing an addition to a multiplication is uninteresting. However, if your architecture doesn't use skip connections and achieves the same accuracy (à la RepVGG), your model could achieve lower latency.
  2. Experiments disagree with your hypothesis: One good approach at this point is to design more "fail fast" experiments to determine where your hypothesis breaks. For example, say you train a segmentation model on everyday objects and find its accuracy suffers horribly. Decouple the problem by asking: (a) Is the problem the segmentation model? Or (b) is the problem the segmentation labels? To find out, visualize predictions and labels to see which is wrong (a minimal sketch follows this list). COCO bicycle segmentation labels, for example, are notoriously bad¹.
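
Here's a minimal sketch of that visualization, assuming a PyTorch-style segmentation model and a dataset yielding (image, label) pairs; the names `model` and `dataset` are placeholders, not any specific library's API.

import matplotlib.pyplot as plt
import torch

def show_predictions(model, dataset, n=4):
    # Plot image, label, and prediction side by side to see which is wrong.
    model.eval()
    fig, axes = plt.subplots(n, 3, figsize=(9, 3 * n))
    for i in range(n):
        image, label = dataset[i]  # image: (C, H, W), label: (H, W)
        with torch.no_grad():
            # assumed model output: logits of shape (1, num_classes, H, W)
            pred = model(image.unsqueeze(0)).argmax(dim=1).squeeze(0)
        axes[i][0].imshow(image.permute(1, 2, 0))  # CHW -> HWC for plotting
        axes[i][0].set_title("image")
        axes[i][1].imshow(label)
        axes[i][1].set_title("label")
        axes[i][2].imshow(pred)
        axes[i][2].set_title("prediction")
    plt.show()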

For both obstacles, one common approach is to narrow down or change the "challenge," looking for a problem domain where your hypothesis is true. This is a dangerous game to play, one with endless rabbit trails — as the old saying goes, you're running around with a hammer looking for a nail.

If you absolutely must do this, start with the domain where the method should most obviously work: one where failure would convince you the hypothesis is false. Note this approach has also spawned a large number of uninsightful <hot method> for <application> papers.


One paper that famously succeeded despite being beaten to the punch was ResNet, published in Dec 2015, 8 months after a highly-similar HighwayNet, published in May 2015. The motivation, methods, and results were all similar: Both address the same problem of deeper networks plateauing in accuracy, the same task of image classification, and the same idea of multi-branch neural networks.

Despite solving the same challenge 8 months later, ResNet has been cited 150,000+ times, 30x more than HighwayNet's 5,000+ citations. This is largely due to ResNet's simplicity and ease of implementation; you can see for yourself how ResNet changed its storyline by simplifying the method. A sketch of both blocks follows.
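
To see the difference in complexity for yourself, here's a minimal sketch of both blocks in PyTorch. I use fully-connected layers for brevity (the original papers use convolutions), so treat this as an illustration rather than either paper's implementation.

import torch
import torch.nn as nn

class HighwayBlock(nn.Module):
    # Highway layer: a learned gate decides how much to transform vs. carry.
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)  # extra parameters, just for gating

    def forward(self, x):
        t = torch.sigmoid(self.gate(x))   # gate values in [0, 1]
        h = torch.relu(self.transform(x))
        return t * h + (1 - t) * x        # gated mix of new and old

class ResidualBlock(nn.Module):
    # Residual layer: the skip connection is a parameter-free addition.
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)

    def forward(self, x):
        return x + torch.relu(self.transform(x))  # just add the input back

The residual block adds no parameters for the skip itself, which is part of why it's so easy to drop into existing architectures.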

Do you have the minimum viable result?

At least 6 weeks in advance of your paper deadline, you should ask yourself: Have I validated my paper's core hypothesis and achieved a minimum viable result? A "minimum viable result" is one that shows your insight or hypothesis holds. If you've reached this step, you have the punchline.

However, you still need the scaffolding around that idea. You need to validate against the right baselines, check performance across a reasonable number of scenarios, and ensure there aren't obvious baselines you've forgotten. If you think there's a fighting chance of submission, start writing immediately:

  1. You'll need a paper to submit anyways. Even if you finish experimentation before the deadline, no paper means no submission. Prepare the submission concurrently, while experiments are running.
  2. Story is more important than experiments. Starting the paper early helps you iterate on and refine the motivation for your project: both how you communicate the problem statement and how you communicate the hypothesis. Communicating the arc of your storyline, from problem to insight to hypothesis to method, is more important than the experiments, which rarely stand on their own.
  3. Paper writing will help you focus your experiments. In identifying the crux of your story, you also identify which arguments are quintessential, which can focus your experiments on the arguments you need to make. Note that giving talks and presenting your work, even before it's finished, can have the same effect.

Get started on paper writing, and even in the worst-case scenario, you've simply gotten a head start on the next deadline. Here's a summary flowchart for paper-readiness. Right now, pick a project and determine where it is in this flowchart, even if you're not planning on submitting immediately. This tells you what your next objective is:

flowchart LR
PR(Paper ready?)
CC(Clear challenge?)
N1(No, find one)
Y1(Yes)
PR --- CC
CC --- N1
CC --- Y1
CH(Clear hypothesis?)
N2(No, find one)
Y2(Yes)
PR --- CH
CH --- Y2
CH --- N2
MVR(Minimum viable result?)
Y1 --- MVR
Y2 --- MVR
N3("No, don't push")
Y3(Yes, push)
MVR --- N3
MVR --- Y3

Not paper ready?

If you're conducting research without an accessible mentor, let me pass on some advice that my adviser gave me.

Just over a day before a deadline, my adviser Joey told me: Don't let the deadline push you into publishing subpar research. Instead of maximizing stress by chasing deadlines, do good research, put it on arXiv when you're ready, and simply submit to the next available conference. This is especially true in AI/ML, with conference deadlines every 3-4 months.

Being stubborn, I've actually received this advice from him many times throughout my PhD. If you're now in this position — either you know you're pushing towards a deadline you won't make, or you've come to terms with not being able to make it — it's perfectly okay. Stepping back sooner rather than later will certainly save you some sanity. There are frequent deadlines, and you can simply shoot for the next one.


This is not to say you should keep avoiding deadlines indefinitely. However, I've never regretted spending another 3 months improving the robustness of a paper. Think of it as an extreme deadline extension. Rather than completely putting the paper aside, work on it at a more sustainable pace².

Present. Present. Present.

As I mentioned above and in What I learned in my PhD, a big part of conducting research is presenting your challenge, hypothesis, and method over and over again. Present to anyone who'll listen, and iterate. Feedback can be as simple as a lack of interest from your close friends.

If you've decided to pursue the deadline, informally discuss the story with your colleagues. Run the abstract by your mentors. If you've decided to pursue the next deadline instead, prepare more formal talks for larger groups. See what holes your audience pokes in the story, and see what questions you get.

This feedback doesn't stop at the paper submission either. After submission, continue to present your work, refining your storytelling skills for the next project.

In sum, use this guide to determine whether or not to submit. Once you've decided, commit: push if you're submitting, and don't beat yourself up if you're not. Take a break if you're burned out. I didn't write this guide to dissuade you from submitting but to save you from avoidable, pointless all-nighters.





  1. To be fair, it's equally well-known that segmentation labels are expensive to obtain. Even worse, bicycles are difficult to annotate to begin with. 

  2. Pushing for a deadline is not necessarily bad either, even if you know you won't make it. However, being honest with yourself about your chances of submission is important. I draw the line in the sand at all-nighters: If you know you won't make it, don't pull them. Try not to give up exercise, healthy eating, or sleep. You'll need your sanity for the next deadline.