
    Most Resume Formatting Advice Fails a Basic ATS Parsing Test. Here Is What Survives.

    Two-column layouts, graphics-heavy templates, and creative formatting look impressive to humans but create parsing failures that tank ATS scores before the scoring engine even runs.


    Ajusta Editorial Team

    2026-03-28 · 12 min read

    Before an ATS scoring engine evaluates your keywords, skills, or experience, something else happens first: parsing. The system has to read your resume file, extract the text, identify section boundaries, and structure the content into fields it can score against. If the parsing step fails or produces garbled output, everything downstream suffers. Keywords get lost. Section headers are misidentified. Experience entries bleed into each other.

    Most resume formatting advice focuses on what looks good to a human reader. But the formatting choices that make a resume visually appealing are often the same ones that create parsing failures. We examined this problem using data from our production pipeline, where we see the actual parsed output that ATS engines work with, not just the final score. The gap between what candidates submit and what the system actually reads is larger than most people realize.

    About the data

    This article draws on parsing behavior observed across the 22 base resumes in our production dataset, supplemented by format-specific testing we conducted to isolate the effect of structural choices on parsing quality. Our scoring engine uses a parsing layer that extracts text, identifies sections, and maps content to scoreable fields before the deterministic-v2-semantic scorer evaluates keywords, skills, experience, education, and contextual fit.

    The resumes in our dataset arrived in PDF format. We tested additional format variations (DOCX, multi-column layouts, table-based layouts) to observe parsing differences. All observations are specific to our parsing pipeline and may differ across ATS systems.

    Parsing and scoring are separate steps, and the first one matters more than you think

    ATS scoring is not a single operation. It is a pipeline with at least two stages. Stage one is parsing: extracting text from the document, identifying structural elements (section headers, bullet points, dates, job titles), and organizing the content into a structured representation. Stage two is scoring: evaluating the parsed content against the job description across the five components (keywords, skills, experience, education, contextual fit).

    Most candidates focus entirely on stage two. They optimize keywords, rephrase bullet points, and tailor their content to each job. But if stage one produces a degraded version of their resume, the stage two optimizations are working with corrupted input. You can have perfect keyword coverage in your actual document, but if the parser cannot extract those keywords because they are trapped in a text box or rendered as an image, the scoring engine will never see them.

    The ATS processing pipeline

    Stage 1: Parsing
    • Extract text from PDF/DOCX
    • Identify section boundaries
    • Detect dates, titles, entities
    • Map content to structured fields
    • Handle layout and formatting

    Failure here corrupts everything downstream

    Stage 2: Scoring
    • Keywords (40% weight)
    • Skills assessment (25%)
    • Experience evaluation (15%)
    • Education matching (10%)
    • Contextual fit (10%)

    Only as good as the parsed input it receives
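The two-stage pipeline above can be sketched in a few lines of Python. This is a deliberately naive stand-in, not our production parser: it recognizes only standard headings and scores only the keyword component, but it shows why a miss at stage one becomes a miss at stage two.

```python
def parse(raw_text):
    """Stage 1: split raw resume text into sections keyed by standard headings."""
    headings = ("Summary", "Experience", "Education", "Skills")
    sections, current = {}, None
    for line in raw_text.splitlines():
        line = line.strip()
        if line in headings:          # only standard headings are recognized
            current = line
            sections[current] = []
        elif current and line:
            sections[current].append(line)
    return sections

def score_keywords(sections, job_keywords):
    """Stage 2 (keywords only): fraction of job keywords found in the parsed text."""
    parsed_text = " ".join(t for lines in sections.values() for t in lines).lower()
    hits = [kw for kw in job_keywords if kw.lower() in parsed_text]
    return len(hits) / len(job_keywords) * 100

resume = "Summary\nBackend engineer\nSkills\nPython, SQL, Docker"
print(score_keywords(parse(resume), ["Python", "SQL", "Kubernetes"]))  # 2 of 3 match
```

If the Skills heading were replaced with a creative alternative, stage one would never open that section, and every keyword inside it would vanish from stage two's input.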

    The formatting choices that cause the most parsing damage

    Not all formatting problems are equal. Some cause minor information loss. Others render the resume nearly unreadable to the parser. Based on what we have observed across our production pipeline and targeted format testing, we can rank the most common formatting choices by their impact on parsing quality.

    Formatting choices ranked by parsing risk

Multi-column layouts: High risk

    Text from parallel columns gets merged into a single stream, scrambling the reading order. Section headers from column A appear next to bullet points from column B.

    Prevalence: Very common in designer templates

Table-based layouts: High risk

    Tables are designed for data grids, not flowing text. Parsers often read cell-by-cell rather than row-by-row, breaking sentence continuity and misidentifying content boundaries.

    Prevalence: Common in template marketplaces

Text embedded in images or graphics: Critical risk

    Completely invisible to text parsers. Any content rendered as a graphic (skill bars, infographic sections, icons with text) simply does not exist in the parsed output.

    Prevalence: Common in creative/design resumes

Custom or decorative fonts: Medium risk

    Most parsers handle standard fonts well. Unusual fonts with custom character mappings can cause individual characters to be misread, producing garbled text in the parsed output.

    Prevalence: Moderate

Header/footer content: Medium risk

    Many parsers strip or ignore header/footer regions. Contact information, names, or key details placed in these areas may not make it into the parsed content.

    Prevalence: Moderate

Non-standard section headings: Low-Medium risk

    Parsers look for common headings like "Experience" or "Education". Creative alternatives like "My Journey" or "What I Bring" may not be recognized as section boundaries.

    Prevalence: Common in informal/startup-style resumes

Single-column, standard formatting: Low risk

    Minimal parsing issues. Text flows in reading order. Section boundaries are clear. Content maps predictably to structured fields.

    Prevalence: Standard in professional contexts

    The pattern is clear: the more visually creative the formatting, the higher the parsing risk. This creates an uncomfortable tension for candidates. The templates that stand out on a computer screen are often the ones that perform worst through ATS parsing. And because parsing failures are invisible to the candidate (you never see the parsed output), the damage goes undetected until the rejection arrives.
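The highest-risk failure mode, column merging, is easy to simulate. The snippet below uses fabricated resume content, not real parser output; it mimics an extractor that walks each physical line across both columns, which is exactly how sidebar text ends up inside unrelated bullets:

```python
# Two parallel columns as they appear on the page.
main_column = [
    "Experience",
    "Led migration of payment service,",
    "reducing deploy time by 40%",
]
sidebar = [
    "Skills",
    "Python",
    "Kubernetes",
]

# A naive extractor reads each physical line left to right,
# interleaving the two streams into one scrambled text flow.
parsed = [f"{left} {right}" for left, right in zip(main_column, sidebar)]
print("\n".join(parsed))
# "Skills" lands on the Experience heading line, and the skill names
# appear mid-sentence in unrelated bullets.
```

The scrambled output still contains every word, but the structure the scorer depends on (which words belong to which section) is gone.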

    How parsing damage cascades into scoring

    Parsing problems do not just cause generic "lower scores." They damage specific scoring components in predictable ways. Understanding which components are most vulnerable helps explain why two candidates with similar qualifications can receive dramatically different scores based solely on their resume formatting.

    Scoring component vulnerability to parsing failures

How each scoring component is affected when the parser produces degraded output from a poorly formatted resume.

Keywords (40%): Very high

Keywords require exact or near-exact text matching. If the parser garbles text, drops words, or merges content from different sections, keyword matches fail silently. A misparsed word is an absent word as far as the scorer is concerned.

Skills (25%): High

    Skills assessment requires context around skill mentions. When bullet points get merged or scrambled, the contextual evidence of skill application is destroyed even if the skill name itself survives parsing.

Experience (15%): Moderate

    Experience scoring relies on dates, job titles, and career progression. These are typically formatted consistently and survive most parsing problems. But table-based layouts can scramble date-title associations.

Education (10%): Low-Moderate

    Education sections are usually short and structured. Degree names, institution names, and graduation dates are relatively parsing-resistant. The main risk is when education is placed in a sidebar column.

Contextual fit (10%): Moderate

    Contextual fit evaluates overall domain alignment. It is somewhat resilient to individual parsing errors because it looks at the resume holistically, but severe parsing damage reduces the available signal.

    The vulnerability pattern mirrors the weight distribution. Keywords, carrying the highest weight at 40%, are also the most vulnerable to parsing failures. This is particularly damaging because, as we showed in our before-and-after analysis, keyword improvement is the primary mechanism through which optimization works. If parsing has already corrupted the keyword layer, optimization has less material to work with.
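The cascade is just weighted-sum arithmetic with the component weights listed above. The per-component scores below are hypothetical, chosen only to illustrate how a keyword-heavy parsing hit drags the overall score:

```python
# Component weights from the scoring breakdown described above.
WEIGHTS = {
    "keywords": 0.40,
    "skills": 0.25,
    "experience": 0.15,
    "education": 0.10,
    "contextual_fit": 0.10,
}

def overall_score(components):
    """Weighted sum of per-component scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# Same content, different parsing quality (illustrative numbers):
clean   = {"keywords": 80, "skills": 70, "experience": 75, "education": 90, "contextual_fit": 70}
garbled = {"keywords": 35, "skills": 50, "experience": 70, "education": 85, "contextual_fit": 60}

print(overall_score(clean))              # 76.75
print(overall_score(garbled))            # 51.5
```

Because keywords carry 40% of the weight, a large keyword loss moves the overall score far more than an equal loss in education or contextual fit would.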

    What a parsing-safe resume actually looks like

    The resumes in our production dataset that scored best before optimization shared structural characteristics that had nothing to do with their content quality. They were formatted in ways that minimized parsing ambiguity. Here is what they had in common.

    Structural characteristics of high-parsing-quality resumes

    Single-column layout
    Content flows in one direction. No ambiguity about reading order.
    Standard section headings
    "Experience", "Education", "Skills" rather than creative alternatives.
    Plain text content
    No images, graphics, charts, or embedded visual elements.
    Consistent date formatting
    Dates in the same format throughout (e.g., "Jan 2022 - Present").
    Standard bullet points
    Simple round bullets or dashes, not custom symbols or icons.
    PDF from a text editor
    Exported from Word, Google Docs, or LaTeX. Not saved from a design tool.
    Contact info in body text
    Not in headers, footers, text boxes, or sidebar columns.
    No table-based structure
    Content organized with headings and paragraphs, not table cells.

    None of these characteristics make a resume look better to a human recruiter. Some of them actively make it look plainer. But they ensure that the parser delivers a clean, accurate representation of the resume's content to the scoring engine. In ATS processing, boring formatting is reliable formatting.
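Some of these checks are easy to automate yourself. As one example, here is a quick consistency check for date formats; the two patterns and their names are illustrative, and you would extend them to whatever styles appear in your own resume:

```python
import re

# Two common date styles; a parsing-safe resume sticks to exactly one.
PATTERNS = {
    "Mon YYYY": re.compile(r"\b[A-Z][a-z]{2} \d{4}\b"),   # e.g. "Jan 2022"
    "MM/YYYY":  re.compile(r"\b\d{2}/\d{4}\b"),           # e.g. "03/2019"
}

def date_styles(text):
    """Return the set of date styles that appear in the resume text."""
    return {name for name, pat in PATTERNS.items() if pat.search(text)}

mixed = "Jan 2022 - Present ... 03/2019 - 12/2021"
print(date_styles(mixed))  # two styles detected -> inconsistent formatting
```

More than one style in the result means the parser has to guess which pattern marks an employment period, which is exactly the ambiguity you want to remove.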

    The score impact of format choices in our data

    Among the 22 base resumes in our dataset, the ones with clean single-column formatting consistently outperformed those with more complex layouts on the keyword component, even when their actual keyword coverage was comparable. This is not a content effect. It is a parsing effect. The well-formatted resumes simply delivered more of their content to the scorer intact.

    We also tested reformatting several resumes from their original complex layouts into clean single-column versions, keeping the content identical. The results were consistent: the reformatted versions scored higher on keywords, and in some cases the improvement was substantial enough to move the overall score up by several points before any content optimization was applied.

    Score impact of format correction (same content, different format)

    Resumes reformatted from complex layouts to clean single-column, with identical content. Score changes are entirely due to improved parsing.

Resume with two-column sidebar: +6 overall (keyword score 14 → 29)
Sidebar skills were being parsed as part of adjacent experience bullets.

Resume with table-based layout: +5 overall (keyword score 8 → 22)
Table cells were being read in the wrong order, scrambling keyword context.

Resume with header contact info: +2 overall (keyword score 31 → 34)
Name and title in the header were invisible to the parser, affecting contextual scoring.

Resume with graphic skill bars: +6 overall (keyword score 11 → 26)
Skill names rendered as images were invisible; replaced with a plain-text list.

    The two-column sidebar case is instructive. The candidate had listed relevant technical skills in a sidebar column. When the parser processed the resume, it merged the sidebar text with the main column, producing jumbled output where "Python" appeared mid-sentence in an unrelated experience bullet. The keyword scorer could not match "Python" because it was no longer in a recognizable context. Simply moving the skills to a dedicated section in the main column, with no other changes, boosted the keyword score from 14 to 29.

    The file format question: PDF versus DOCX

    This is one of the most frequently asked questions in resume optimization, and the answer is more nuanced than the usual advice suggests. Both PDF and DOCX are acceptable for most ATS systems in 2026. But they have different parsing characteristics that matter in edge cases.

PDF

    • Preserves visual formatting exactly
    • Cannot be accidentally edited by the recipient
    • Text extraction quality varies with how the PDF was created
    • PDFs from design tools (Canva, Figma) often have poor text layers

    Best when: exported from Word, Google Docs, or LaTeX with a proper text layer.

    DOCX

    • Text content is directly accessible without extraction
    • Structural elements (headings, lists) are semantically tagged
    • Formatting may shift between different versions of Word
    • Tables and text boxes in DOCX carry the same parsing risks as in PDF

    Best when: you need maximum parsing reliability or the job posting specifically requests this format.

    In practice, the difference between a well-made PDF and a well-made DOCX is minimal for parsing purposes. The problems arise from how the document was created, not which format it was saved in. A PDF exported from Google Docs with single-column formatting will parse cleanly. A PDF exported from Canva with multi-column layouts, custom fonts, and graphic elements will not. The file extension is less important than the creation method.
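The "directly accessible" point about DOCX can be demonstrated with nothing but the standard library: a DOCX file is a ZIP archive whose body text lives in word/document.xml as tagged elements. The sketch below builds a minimal stand-in file in memory (real DOCX files carry more parts than this) and reads the text back without any layout reconstruction:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace, in ElementTree's "{uri}tag" notation.
NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def docx_text(data):
    """Extract the run text (<w:t> elements) from DOCX bytes, in document order."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        root = ET.fromstring(zf.read("word/document.xml"))
    return [t.text for t in root.iter(f"{NS}t")]

# Build a tiny stand-in DOCX in memory.
doc_xml = (
    f'<w:document xmlns:w="{NS[1:-1]}"><w:body>'
    "<w:p><w:r><w:t>Experience</w:t></w:r></w:p>"
    "<w:p><w:r><w:t>Built data pipelines in Python</w:t></w:r></w:p>"
    "</w:body></w:document>"
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml", doc_xml)

print(docx_text(buf.getvalue()))
```

A PDF offers no equivalent shortcut: its text has to be reconstructed from positioned glyphs, which is where creation-tool quality starts to matter.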

    The format decision framework

    Given the data, the formatting decision comes down to a simple question: are you optimizing for human eyes first or ATS parsing first? In most hiring pipelines today, the ATS sees your resume before any human does. That suggests a clear priority order.

    1
    Start with a clean, single-column layout

    This eliminates the highest-risk parsing failures. Two-column and table-based layouts cause the most damage to keyword and skills scoring.

    2
    Use standard section headings

    "Experience", "Education", "Skills", "Summary" are universally recognized by parsers. Creative alternatives risk being misclassified or ignored.

    3
    Keep all content as selectable text

    If you cannot highlight and copy a piece of text in your resume, the parser cannot read it either. No images, no graphic skill bars, no infographic sections.

    4
    Export from a text editor, not a design tool

    Google Docs, Microsoft Word, and LaTeX produce PDFs with clean text layers. Canva, Figma, and similar design tools often produce PDFs where text is partly or fully embedded as graphics.

    5
    Test your resume's parsed output

    Copy-paste your resume into a plain text editor. If the text is garbled, out of order, or missing sections, the ATS parser will have similar problems.

    Step five is the most practical advice in this article, and it is something almost no one does. Open your resume in Preview or Adobe Reader, select all text, and paste it into Notepad or a plain text editor. What you see is roughly what the parser will see. If your skills section is jumbled, if dates appear next to the wrong job titles, or if entire sections are missing, you have a formatting problem that will affect your ATS score before you write a single word.
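That copy-paste test can be made slightly more rigorous: list the keywords you know are in your document, then check whether they survive in the pasted text. The garbled string below is a fabricated example of two-column interleaving, not real parser output:

```python
def parse_check(pasted_text, expected_keywords):
    """Return the expected keywords that are missing from the pasted plain text."""
    found = pasted_text.lower()
    return [kw for kw in expected_keywords if kw.lower() not in found]

# Garbled paste from a two-column PDF: "Python" and "Kubernetes"
# were split across columns and no longer appear as whole words.
pasted = "Led payment Py migration thon service Kuber reducing netes deploys"
print(parse_check(pasted, ["Python", "Kubernetes", "payment"]))
# Both skill names were in the document; neither survived parsing.
```

An empty result means your keywords made it through intact; anything else is a formatting problem worth fixing before touching the content.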

    How this connects to our broader scoring research

    In our previous articles, we have documented the scoring patterns that emerge from scoring the same resume against different jobs, the mechanics of optimization, and the differences across career levels. All of that research assumes clean parsing. The scores we reported came from resumes that had been properly parsed.

    For candidates whose resumes have formatting issues, the actual scores will be lower than what our research would predict for their content quality. The score gap between a well-formatted and poorly-formatted version of the same resume is not a content problem or a keyword problem. It is a parsing problem, and it happens before the scoring engine even runs.

    Fixing formatting is therefore the highest-leverage, lowest-effort change most candidates can make. It costs nothing, requires no content changes, and removes a silent penalty that may be suppressing their scores by several points. For candidates who have already optimized their content and are wondering why their scores are not improving, the format is the first thing to check.

    Full methodology

    Dataset: 22 base resumes from Ajusta's production pipeline, all in PDF format. Supplemented by controlled format tests where we reformatted selected resumes from complex layouts to single-column while keeping content identical.

    Parsing observation: We examined the parsed text output produced by our parsing layer before it reaches the deterministic-v2-semantic scorer. Parsing quality was evaluated by comparing the parsed output to the original document content and noting discrepancies: missing text, reordered content, merged sections, and garbled characters.

    Score impact measurement: For reformatting tests, we scored the original and reformatted versions against the same job description and compared component-level scores. Content was verified to be identical before and after reformatting.

    Limitations: Our observations are specific to our parsing pipeline. Different ATS systems use different parsers with different strengths and weaknesses. However, the fundamental challenges of parsing multi-column layouts, tables, and image-based content are common across most commercial ATS systems.

    Check if your resume format is costing you points

    Ajusta's scoring engine shows you how your resume is parsed and scored, component by component. If formatting is suppressing your scores, you will see it in the keyword breakdown before any optimization is applied.

