AI Screenr
Product Walkthrough

How AI Screenr Works

How AI Screenr works: configure a role in one click, candidates interview async with voice AI, and scored reports land within 2 minutes. 3 free interviews, no credit card required.

Try Free
By AI Screenr Team

Trusted by innovative companies

eprovement
Jobrela

Three steps from job posting to ranked shortlist

The AI Screenr workflow at a glance — no integration, no scheduling.

1

Configure the role

Paste a job description for one-click AI setup (or build it manually in ~5 minutes). AI Screenr extracts skills, knockouts, rubric weights, and up to 5 structured question blueprints. Edit anything before launch.

2

Share the interview link

Drop one link into your ATS, email, SMS, or job board. Candidates interview async on any device — no scheduling, no app install, no account creation. Typical duration 15–25 minutes (configurable 5–60).

3

Review the scored shortlist

Within 2 minutes of each interview, a rubric-backed report lands in your dashboard with a Strong Yes / Yes / Maybe / No recommendation, dimension scores, evidence quotes, and a ranked shortlist.

See a live walkthrough in under 5 minutes with 3 free interviews.

Try Free — No Credit Card

Step 1 in action — a realistic job configuration

What AI Screenr produces after one-click setup from a pasted job description. Every field is editable before you launch the interview link.

Sample AI Screenr Job Configuration

Senior Product Manager (B2B SaaS)

Job Details

Basic information about the position. The AI reads all of this to calibrate questions and evaluate candidates.

Job Title

Senior Product Manager (B2B SaaS)

Job Family

Product

Product roles emphasise prioritization, discovery, and stakeholder reasoning — the AI calibrates follow-ups around judgement and trade-offs rather than execution detail.

Interview Template

Competency-Based Screen

Allows up to 4 follow-ups per question. Pushes on trade-off reasoning and demands specific past examples — surfaces the difference between experienced PMs and PM-adjacent candidates.

Job Description

We're hiring a senior product manager to own a core B2B workflow surface. You will partner with engineering, design, and go-to-market to discover real customer problems, prioritise ruthlessly, and ship outcomes — not outputs.

Normalized Role Brief

Senior product manager with 5+ years of B2B SaaS experience, a track record of owning an end-to-end product area, and the judgement to make scoping calls without a playbook. Writing-first, opinionated, calm under ambiguity.

Concise 2-3 sentence summary the AI uses instead of the full description for question generation.

Skills

Required skills are assessed with dedicated questions. Preferred skills earn bonus credit when demonstrated.

Required Skills

  • Discovery: customer interviews, opportunity sizing
  • Prioritization frameworks (RICE, Kano, or equivalent applied)
  • Writing — PRDs, decision memos, review narratives
  • Cross-functional leadership (eng + design + GTM)
  • Quantitative instinct (funnel, retention, activation)
  • Stakeholder management with exec visibility

The AI asks targeted questions about each required skill. 3-7 recommended.

Preferred Skills

  • Prior experience in B2B SaaS workflow tools
  • Experience with usage-based or self-serve pricing
  • Design-partner program experience
  • Exposure to platform / API product work

Nice-to-have skills that help differentiate between candidates who all pass the required bar.

Must-Have Competencies

Behavioral/functional capabilities evaluated pass/fail. The AI uses behavioral questions ('Tell me about a time when...').

Prioritization Judgement (Advanced)

Defends trade-off decisions with evidence; can articulate what was cut and why, not only what shipped

Discovery Discipline (Advanced)

Runs real customer conversations, not surveys; distinguishes surface complaints from underlying jobs-to-be-done

Cross-Functional Leadership (Intermediate)

Moves engineering + design + GTM together without authority; resolves scoping conflicts with written analysis

Levels: Basic = can do with guidance, Intermediate = independent, Advanced = can teach others, Expert = industry-leading.

Knockout Criteria

Automatic disqualifiers. If triggered, candidate receives 'No' recommendation regardless of other scores.

B2B Experience

Fail if: No prior B2B SaaS product experience at any scale

This role requires B2B-specific judgement — buyer ≠ user, procurement cycles, seat economics

Tenure

Fail if: Less than 5 years of product management experience

Senior level — needs to own an area without scaffolding from day one

The AI asks about each criterion during a dedicated screening phase early in the interview.

Custom Interview Questions

Mandatory questions asked in order before general exploration. The AI follows up if answers are vague.

Q1

Walk me through the most consequential prioritization call you have made in the last 12 months. What did you cut, what did you keep, and what do you know now that you didn't know then?

Q2

Describe a time your discovery work changed the direction of a planned feature. What did you hear, how did you validate it, and what shipped instead?

Q3

Tell me about a cross-functional disagreement you resolved. What was the disagreement, what did you write down, and how did the team decide?

Open-ended questions work best. The AI automatically follows up if answers are vague or incomplete.

Question Blueprints

Structured deep-dive questions with pre-written follow-ups ensuring consistent, fair evaluation across all candidates.

B1. Design the first version of a usage-based billing upgrade prompt for our product. Walk me through the user segment, trigger, copy, and success metric.

Knowledge areas to assess:

  • user segmentation and entitlements
  • trigger event choice and timing
  • copy and CTA strategy
  • success metric and guardrail metric
  • failure modes and rollback plan

Pre-written follow-ups:

F1. How would you avoid prompting users who are already in a renewal conversation?

F2. What is your guardrail metric, and at what threshold would you pull the prompt?

F3. How do you decide between in-product prompt vs. email vs. account-manager outreach?

B2. A top-10 customer asks for an enterprise feature that would cost 2 engineer-quarters and would not benefit any other customer. Walk me through how you decide.

Knowledge areas to assess:

  • account value vs. platform value framing
  • cost-of-distraction accounting
  • precedent and escalation management
  • alternatives (manual, services, contract-only)
  • stakeholder communication

Pre-written follow-ups:

F1. How do you communicate the decision to the account team?

F2. What if the customer threatens to churn?

F3. How would your answer change if the feature were easy to generalise later?

Unlike plain questions where the AI invents follow-ups, blueprints ensure every candidate gets the exact same follow-up questions for fair comparison.

Custom Scoring Rubric

Defines how candidates are scored. Each dimension has a weight that determines its impact on the total score.

Dimension | Weight | Description
Communication Clarity | 12% | Does the candidate structure answers crisply, use concrete examples, and avoid hedging?
Relevance of Answers | 12% | Does the answer directly address the question asked, or does the candidate pivot to safer ground?
Technical Knowledge | 18% | Understanding of product-management fundamentals — discovery, prioritization, metrics, lifecycle
Problem-Solving | 14% | Ability to reason through ambiguous scenarios, weigh trade-offs, and defend conclusions under pressure
Role Fit | 14% | Match with the actual demands of a senior B2B PM role — writing-first, opinionated, comfortable with ambiguity
Confidence & Presence | 6% | Steady under follow-up probes; acknowledges gaps without spiraling
Behavioral Fit | 10% | Collaboration signals — how the candidate talks about disagreement, conflict, and team credit
Completeness of Answers | 14% | Covers the full question, not only the easy half; volunteers caveats and counter-examples

The default rubric covers Communication, Relevance, Technical Knowledge, Problem-Solving, Role Fit, Confidence, Behavioral Fit, and Completeness. Language Proficiency and Blueprint Question Depth dimensions are added automatically when configured.
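
To make the weighting concrete, here is a minimal sketch of how a weighted aggregate could be computed from this rubric. The weights come from the table above; the function and formula are illustrative assumptions, not AI Screenr's actual scoring engine, which also factors in evidence quality and confidence.

    # Illustrative sketch only: weights from the sample rubric above.
    RUBRIC_WEIGHTS = {
        "communication_clarity": 0.12,
        "relevance_of_answers": 0.12,
        "technical_knowledge": 0.18,
        "problem_solving": 0.14,
        "role_fit": 0.14,
        "confidence_presence": 0.06,
        "behavioral_fit": 0.10,
        "completeness_of_answers": 0.14,
    }

    def overall_score(dimension_scores: dict[str, float]) -> float:
        """Aggregate per-dimension 0-10 scores into a weighted 0-100 total."""
        assert abs(sum(RUBRIC_WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%
        return 10 * sum(RUBRIC_WEIGHTS[d] * s for d, s in dimension_scores.items())

    # The per-dimension scores from the sample report further down this page:
    print(overall_score({
        "communication_clarity": 9, "relevance_of_answers": 9,
        "technical_knowledge": 8, "problem_solving": 8,
        "role_fit": 9, "confidence_presence": 8,
        "behavioral_fit": 7, "completeness_of_answers": 8,
    }))  # ~82.8, in the same range as the sample's 81/100; the real
         # engine blends additional signals, so the numbers need not match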

Interview Settings

Configure duration, language, tone, and additional instructions.

Duration

25 min

Language

English

Template

Competency-Based Screen

Video

Enabled

Tone / Personality

Friendly but structured. The AI keeps the conversation moving and probes politely when answers are vague.

Adjusts the AI's speaking style but never overrides fairness and neutrality rules.

Company Instructions

We are a mid-stage B2B SaaS company with ~80 engineers. Our product is used by RevOps and sales teams. Mention that the role reports to the VP of Product and partners with a Principal Engineer and a Lead Designer.

Injected into the AI's context so it can reference your company naturally and tailor questions to your environment.

Evaluation Notes

Treat written-communication signals (does the candidate structure answers, reference specific metrics, name stakeholders) as positive. Penalise vague generalities and buzzword-heavy answers.

Passed to the scoring engine as additional context when generating scores. Influences how the AI weighs evidence.

Banned Topics / Compliance

Salary negotiation, references, equity package, compensation structure — these are handled separately after the scored round.

The AI already avoids illegal/discriminatory questions by default. Use this for company-specific restrictions.

How AI Screenr filters a 100-candidate funnel

From applications to shortlist: each stage narrows the pipeline using evidence, not gut feel. Numbers below are typical for a mid-volume tech role.

Applications received

All inbound candidates enter the funnel — from job boards, referrals, ATS autoresponders, and direct outreach.

Knockout criteria

Hard filters you define — minimum experience, work authorization, location, salary range, language. Candidates who fail are flagged, not auto-rejected.

Must-have competencies

Pass / fail on the non-negotiable skills for the role (e.g. senior React depth for a Senior React Developer). Evaluated live during the interview.

Language proficiency

Optional CEFR assessment (A1–C2) in the language you specify, with a dedicated interview phase. Skipped if not configured.

Rubric-scored interview

8 default rubric dimensions (customizable) score every answer on a 0–100 scale with evidence quotes, evidence-quality labels (Strong / Moderate / Weak / None), and per-dimension confidence.

Ranked shortlist

Top-scored candidates with 4-point recommendation, executive summary, strengths / risks, notable quotes, and coverage summary. Ready for hiring-manager review.

Stage | Candidates remaining
Applications received | 100
Knockout criteria | 82
Must-have competencies | 55
Language proficiency | 46
Rubric-scored interview | 22
Ranked shortlist | 8
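
In code terms, the funnel behaves like a sequence of gates applied in order. The sketch below is a hypothetical illustration (stage names from the funnel above; the candidate fields, predicates, and thresholds are invented), not the product's pipeline. Note that in the actual product, knockout failures are flagged rather than auto-rejected.

    # Hypothetical illustration of stage-by-stage funnel filtering.
    # In the product, knockout failures are flagged, not auto-rejected;
    # filtering here is purely to show how the counts narrow.
    def run_funnel(candidates, stages):
        """Apply each gate in order, recording how many candidates remain."""
        counts = [("Applications received", len(candidates))]
        for name, passes in stages:
            candidates = [c for c in candidates if passes(c)]
            counts.append((name, len(candidates)))
        return candidates, counts

    stages = [
        ("Knockout criteria", lambda c: c["years_experience"] >= 5 and c["b2b_saas"]),
        ("Must-have competencies", lambda c: all(c["competencies"].values())),
        ("Language proficiency", lambda c: c["cefr"] >= "B2"),  # CEFR codes A1-C2 compare correctly as strings
        ("Rubric-scored interview", lambda c: c["overall_score"] >= 70),  # threshold invented
        ("Ranked shortlist", lambda c: c["recommendation"] in ("Strong Yes", "Yes")),
    ]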

Step 3 in action — a realistic AI screening report

Exactly what lands in your dashboard within 2 minutes of the candidate saying goodbye. Every score is backed by transcript evidence, an evidence-quality label, and a confidence value.

Sample AI Screening Report

Alex Morgan

Overall score: 81/100. Recommendation: Yes.

Confidence: 86%

Recommendation Rationale

Strong senior-PM signal. Alex articulated prioritization trade-offs with specifics — exact metrics, named stakeholders, what got cut and why — across multiple examples. Discovery discipline is genuine: the case of reversing a planned feature after three customer conversations was well-structured and included the anti-narrative (what they expected to hear versus what they actually heard). The visible gap is cross-functional leadership under disagreement; Alex defaults to consensus before pushing, which may be fine or may be a risk depending on the team dynamics. Recommended for the hiring-manager round.

Summary

Five-plus years of B2B SaaS product management with clear area ownership. Strong on discovery and prioritization — concrete examples with metrics and trade-offs. Cross-functional leadership is competent but consensus-leaning; the candidate does not push back early when the team direction is wrong. Writing-first instincts are evident from how answers are structured. Calm under follow-up probes.

Knockout Criteria

B2B Experience: Passed

Over five years of B2B SaaS work across two companies with clear area ownership. Named specific enterprise-deal scenarios.

Tenure: Passed

Seven years of product management experience. Comfortably above the 5-year minimum.

Must-Have Competencies

Prioritization Judgement: Passed (87%)

Multiple concrete examples with named metrics, explicit trade-offs, and what was cut. Defends decisions without hedging.

Discovery Discipline: Passed (82%)

Real customer-interview examples — not surveys, not analytics. Distinguished jobs-to-be-done from stated feature requests twice without prompting.

Cross-Functional Leadership: Passed (68%)

Works well across eng + design + GTM but tends to build consensus before pushing hard. Likely fine for collaborative teams; a risk in environments that need a strong product voice.

Scoring Dimensions

Communication Clarity: 9/10 (weight 0.12, evidence: Strong)

Consistently structured answers with explicit context, decision, and outcome. No filler. Named specific metrics and stakeholders without prompting.

I cut the Phase 2 reporting work because we had 11% adoption on Phase 1 after four weeks — below our 25% bar. The data team wanted to keep going; the tradeoff was between doubling down on adoption vs. shipping a feature that no one was using yet. Anna on analytics and I wrote a one-page decision memo; we killed it.

Relevance of Answers: 9/10 (weight 0.12, evidence: Strong)

Answers hit the specific scenario asked rather than pivoting to safer ground. When pressed, Alex stayed on the question instead of reframing.

You asked about a prioritization call I regret. The honest answer is the customer-success dashboard — we built it, adoption was high, but the exec team never used the weekly digest we also shipped. I should have killed the digest in week two, not week six.

Technical Knowledge: 8/10 (weight 0.18, evidence: Strong)

Strong fundamentals on discovery, prioritization frameworks, and metric instrumentation. Missed a small point on activation-metric definition vs. retention but self-corrected when probed.

For the upgrade prompt I'd use activation defined as 'user completed three saved reports in the first week' as the trigger, not time-based. Activation predicts retention in our data — we know from the month-three cohort analysis.

Problem-Solving: 8/10 (weight 0.14, evidence: Strong)

Thinks in trade-offs. For the top-10 customer scenario, Alex walked through cost-of-distraction, precedent risk, and three alternatives before reaching a recommendation.

Two engineer-quarters is not the only cost — there's the precedent cost, the integration-debt cost, and the opportunity cost. I'd first explore whether we can deliver 80% of the value as a services engagement; if not, I'd scope it explicitly as a paid, roadmap-committed item with a price that reflects the opportunity cost.

Role Fit: 9/10 (weight 0.14, evidence: Strong)

Writing-first, opinionated, comfortable being specific. Mentioned three decision memos by name. No playbook-thinking — all examples are team-specific.

For hard decisions I write a one-pager: the decision, the trade-offs, who disagrees and why, and the reversibility. It goes to the team before the meeting. People come in ready to decide, not ready to present.

Confidence & Presence: 8/10 (weight 0.06, evidence: Moderate)

Steady under probes. Acknowledged the two regrets without defensiveness. Slight hedging when pushed on the 'disagreement with leadership' scenario.

I haven't had a full-blown disagreement with a CPO-level stakeholder — most of mine have been with GTM leads. I can give you one of those if that's useful.

Behavioral Fit: 7/10 (weight 0.10, evidence: Moderate)

Collaboration signals are positive but consensus-leaning. Several examples showed Alex pushing back only after team direction had already drifted, not earlier.

I raised the concern at the fourth prioritization meeting. In hindsight the second meeting was where I should have raised it — by then we were three weeks in and the sunk-cost argument was already forming.

Completeness of Answers: 8/10 (weight 0.14, evidence: Strong)

Volunteered caveats and counter-examples without being asked. Answered both halves of compound questions — what went well and what did not.

The upgrade-prompt experiment succeeded on conversion but the downstream effect was increased support volume from users who upgraded without understanding the entitlement. That was a miss we should have anticipated.

Blueprint Question Coverage

B1. Usage-based billing upgrade prompt

  • user segmentation and entitlements
  • activation-based trigger choice
  • copy and CTA strategy
  • success + guardrail metric pairing
  • rollback plan and prompt kill criteria

+ Defined activation with concrete behavioural criteria rather than time

+ Paired conversion metric with a support-volume guardrail

- Did not discuss what threshold would trigger pulling the prompt

B2. Top-10 customer asking for non-generalizable feature

  • cost-of-distraction and precedent framing
  • alternatives exploration (services engagement, scoped commitment)
  • stakeholder communication plan
  • what changes if the feature could become generalisable later

+ Framed cost beyond headcount — precedent + integration debt

+ Proposed scoped, paid alternative before reaching refusal

- Did not address the counterfactual where the feature could be generalised

Interview Coverage

Overall Coverage

Strengths

  • Writing-first instincts — decision memos are a default tool, not a ceremony
  • Prioritization trade-offs backed by named metrics and specific dates
  • Strong discovery discipline — distinguishes jobs-to-be-done from stated asks
  • Volunteers counter-examples and regrets without being prompted

Risks

  • Consensus-leaning under disagreement — pushes back after drift rather than before
  • Limited evidence of CPO-level conflict; all examples are GTM-facing

Notable Quotes

People come in ready to decide, not ready to present.
Two engineer-quarters is not the only cost — there's the precedent cost, the integration-debt cost, and the opportunity cost.
The honest answer is the customer-success dashboard — we built it, adoption was high, but the exec team never used the weekly digest.

Suggested Next Step

Advance to a 60-minute hiring-manager round focused on one 'I disagreed with leadership' scenario and one 'scope-cut defence' case study. Probe for the moments where Alex would push back earlier in the process, not only when consensus has already drifted.

AI Screenr turns a job description into a scored shortlist in four steps. This page walks through the AI interview process end-to-end so you know exactly what happens between clicking "create job" and opening the first ranked report.

  • Step 1: Configure the role (one click or ~5 minutes manual)
  • Step 2: Candidate completes the voice interview async (typically 15–25 minutes)
  • Step 3: AI scores and summarises (under 2 minutes per candidate)
  • Step 4: You review the ranked shortlist

No ATS integration required. Works with any existing hiring pipeline.

Try the full AI interview process with 3 free interviews →

Step 1 — Configure the Role

You have two paths into a live interview link. Most teams use the AI-generated path.

Option A — One-Click AI Configuration

Paste a job description (internal or public, any length under 10,000 characters). AI Screenr extracts and populates:

  • Title, description, role brief, job family, interview template
  • Required skills and preferred skills
  • Must-have competencies with required levels (basic / intermediate / advanced / expert)
  • Knockout criteria (minimum experience, work authorization, salary range, language, anything you wrote into the JD)
  • Custom interview questions
  • Up to 5 structured question blueprints — each with must-cover topics, follow-up prompts, and strong / weak answer indicators so the AI knows what a great answer sounds like for this specific role

You review the draft, adjust anything that does not match your actual bar, and save. Typical time: 30 seconds to a minute.

Option B — Manual Configuration

Prefer to build from scratch? The form walks you through every field directly. Allow about 5 minutes for a new role, faster once you have a template to clone.

What Gets Configured

Either path produces the same output; a sketch of the resulting configuration object follows the list:

  • Core interview questions — typically 6–10 main areas mapped to rubric dimensions.
  • Question blueprints — must-cover topics + follow-ups + answer indicators per question.
  • Follow-up depth — how hard the AI pushes on shallow answers, configurable per dimension.
  • Knockout criteria — flagged, not auto-rejected; you decide the next step.
  • Rubric dimensions — 8 default (fully customizable) plus a 9th language-proficiency dimension when the interview is non-English.
  • Language + CEFR target — interview language (57 supported) and whether language proficiency is being assessed (A1–C2).
  • Interview duration — 5 to 60 minutes, typically 15–25.
  • Video recording — optional, opt-in per role.
  • Link expiration — how long the link remains active for candidates.
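
Putting those fields together, a saved configuration can be pictured as a single object. The sketch below is schematic: field names and the link-expiration value are invented for illustration, not AI Screenr's actual schema; the values echo the sample PM role shown earlier.

    # Schematic role configuration -- illustrative field names, not the real schema.
    role_config = {
        "title": "Senior Product Manager (B2B SaaS)",
        "template": "Competency-Based Screen",
        "duration_minutes": 25,       # configurable 5-60
        "language": "English",        # 57 languages supported
        "cefr_target": None,          # e.g. "B2" when language proficiency is assessed
        "video_recording": True,      # opt-in per role
        "link_expiration_days": 14,   # hypothetical value
        "required_skills": ["Discovery", "Prioritization frameworks", "Writing"],
        "knockouts": [
            {"name": "B2B Experience", "fail_if": "no prior B2B SaaS product work"},
            {"name": "Tenure", "fail_if": "under 5 years of product management"},
        ],
        "blueprints": [  # up to 5, each with must-cover topics and fixed follow-ups
            {
                "question": "Design the first version of a usage-based billing upgrade prompt.",
                "must_cover": ["segmentation", "trigger", "copy", "success metric"],
                "follow_ups": ["What is your guardrail metric?"],
            },
        ],
        "rubric": "default-8",        # a language-proficiency dimension is added for non-English
    }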

For concrete role examples see React Developer, Backend Developer, and Sales Manager — each page shows a filled-in configuration and sample report.

Step 2 — Candidate Completes the Voice Interview

This is the only step that involves the candidate. Everything upstream is recruiter-side, everything downstream is automated.

What Candidates See

  1. The link. One URL, sent via your usual channel — ATS auto-reply, email, SMS, job-board message. No account creation, no app install, no scheduling page.
  2. Consent and mic check. Before recording begins, the candidate sees a consent screen, grants microphone access, and completes a 10-second mic check.
  3. Greeting. The AI introduces itself, explains the interview flow, confirms the role being interviewed for, and answers "what happens with this recording?" in plain language.
  4. The conversation. The AI asks questions, listens, and adapts. Strong answers get acknowledged and pushed deeper; shallow answers get follow-up probes. Candidates can ask the AI to repeat a question, take a pause, or ask clarifying questions mid-conversation.
  5. Close. The AI wraps up, offers the candidate a chance to ask anything, and confirms what happens next.

What Makes the Interview Fair

Every candidate for the same role gets the same rubric. The AI tailors follow-ups to each candidate's specific answers, so no two transcripts are identical, but all transcripts are scored against identical criteria. The candidate cannot tell from the interview alone whether they are doing well or poorly, which reduces performance anxiety and produces more honest signal.

Timing and Completion

  • Typical duration: 15–25 minutes (you can configure 5–60 per role).
  • Completion rate: 80–90% — significantly higher than one-way recorded video because there is real interaction.
  • If the candidate gets disconnected, they can resume from the same link for up to 24 hours; the interview picks up where it left off. Partial interviews are flagged in the report.

For the async-hiring framing of this flow, see our async interview software page.

Step 3 — AI Scores and Produces the Report

Within 2 minutes of the candidate saying goodbye, a structured report is ready in your dashboard. Here is exactly what lands in the report:

Top of the Report

  • Overall score — 0–100 weighted aggregate across all rubric dimensions.
  • 4-point hiring recommendation: Strong Yes / Yes / Maybe / No.
  • Overall confidence — 0.0 to 1.0 score reflecting how much evidence the AI had to work with.
  • Executive summary — 2–3 sentence TL;DR for hiring managers who only read the top.

Dimensional Scores

Each rubric dimension shows the following, collected into a single record in the sketch after this list:

  • Score (0–10, then weighted) with a 1–2 sentence rationale
  • Evidence-quality label: Strong / Moderate / Weak / None
  • Confidence value per dimension (0.0–1.0)
  • Evidence snippets — direct transcript quotes that support the score
  • Linked questions — which interview question(s) produced the evidence
  • Missing evidence notes — what the rubric expected but the transcript did not reveal
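
Collected into one record, a dimension entry might look like this. The shape is hypothetical, assembled from the fields above with values borrowed from the sample report; the confidence figure is invented, and this is not AI Screenr's actual export format.

    # Hypothetical shape of one dimension entry -- not the actual export format.
    dimension_entry = {
        "dimension": "Communication Clarity",
        "score": 9,                    # 0-10, before weighting
        "weight": 0.12,
        "rationale": "Consistently structured answers with explicit context...",
        "evidence_quality": "Strong",  # Strong / Moderate / Weak / None
        "confidence": 0.91,            # 0.0-1.0 per dimension (value invented)
        "evidence_snippets": ["I cut the Phase 2 reporting work because..."],
        "linked_questions": ["Q1"],
        "missing_evidence": [],        # what the rubric expected but the transcript lacked
    }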

Knockout and Must-Have Results

If you defined them in Step 1:

  • Knockout results — triggered / assessed flags plus evidence for each criterion.
  • Must-have competency results — pass / fail plus evidence per competency.

Summary Blocks

  • Strengths — 3–5 bullets of what stood out.
  • Risks — 3–5 bullets of what raised concerns.
  • Notable quotes — the most interesting lines the AI pulled from the transcript.
  • Suggested next step — a recommendation tuned to the score and evidence quality.
  • Coverage summary — what fraction of custom questions, competencies, knockouts, and blueprints were actually covered by the candidate's answers.

Transcript and Recording

  • Full transcript of the conversation, searchable and time-stamped.
  • Audio recording by default; video recording if you enabled it for this role.

Scoring uses the rubric version saved at the time of the interview. If you tune the rubric mid-pipeline, earlier interviews keep their original scores and the new rubric applies from that point forward — you see clean version history instead of silent recomputation.
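
One way to picture that guarantee: each report pins the rubric version that was live when it was scored, and tuning the rubric appends a new version rather than rewriting the old one. The sketch below is an assumed model with invented names, not the actual storage design.

    from dataclasses import dataclass, field

    # Assumed model of rubric version pinning -- names invented for illustration.
    @dataclass(frozen=True)
    class RubricVersion:
        version: int
        weights: dict  # dimension -> weight, frozen when saved

    @dataclass
    class Report:
        candidate: str
        score: float
        rubric_version: int  # pinned at scoring time; never recomputed

    @dataclass
    class Role:
        rubric_history: list = field(default_factory=list)  # append-only

        def tune_rubric(self, new_weights: dict) -> RubricVersion:
            """Editing the rubric appends a version; earlier reports keep theirs."""
            v = RubricVersion(len(self.rubric_history) + 1, dict(new_weights))
            self.rubric_history.append(v)
            return v

        def score_interview(self, candidate: str, raw_score: float) -> Report:
            # New interviews always use the latest version at scoring time.
            return Report(candidate, raw_score, self.rubric_history[-1].version)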

Every role page has a sample report. Spot-check a few: QA Engineer, DevOps Engineer, Software Engineer, Product Manager.

Step 4 — You Review the Ranked Shortlist

The dashboard ranks candidates by overall score with knockouts surfaced up front. A typical review cycle:

  1. Scan the ranked list. Top 20% usually deserve a closer look. Knockout-triggered candidates drop to the bottom.
  2. Open the top reports. Read the 2–3 sentence summary first, then the strengths / risks bullets, then skim the evidence the AI flagged.
  3. Decide. Advance, reject, or flag for follow-up. Bulk-action the obvious rejects. Keep the strong-Yes tier for the hiring manager.
  4. Share with hiring managers. One click gives them a share link, a PDF, or a paste-able summary for Slack / email — no AI Screenr account needed.

Recruiter time per candidate drops from 25–45 minutes of live call plus notes to roughly 5 minutes of reviewing a structured report. For concrete math on the hours saved at typical volumes, see replace screening calls.

Timing at a Glance

Step | Actor | Typical time
Configure role (AI one-click) | Recruiter | 30–60 seconds
Configure role (manual) | Recruiter | ~5 minutes
Candidate interview | Candidate | 15–25 minutes (configurable 5–60)
AI scoring and report generation | Automated | Under 2 minutes per candidate
Review report per candidate | Recruiter | ~5 minutes

For a 50-candidate funnel, recruiter time drops from roughly 25 hours (live screens + notes) to roughly 5 hours (report review), freeing about 20 hours for higher-leverage work.

Example Reports by Role

AI Screenr covers every job category — from software engineering to healthcare, retail, construction, and hospitality. Each role page has a sample interview report so you can see exactly what lands in your dashboard. A selection below spanning technical, specialist, and service-sector roles:

Role | Category
Backend Developer | Technology
UX Designer | Design
Data Analyst | Technology
Financial Analyst | Finance
Recruiter | HR
Paralegal | Legal
Real Estate Agent | Real Estate
Construction Manager | Construction
Production Manager | Manufacturing
Veterinarian | Veterinary

Or browse all 960+ role-specific AI interview guides by category.

Security & Privacy Through the Interview Flow

Each step of the AI interview process has a defined data-handling contract. Consent is captured before any recording starts — candidates see an explicit consent screen covering what is recorded, how it is used, and who can see it. Audio and transcripts are stored in-region (EU hosting is available for GDPR-sensitive pipelines) with configurable retention windows per role, after which data is automatically purged. Candidates can request deletion of their data at any time via a self-service flow, and we publish a Data Processing Agreement on request. On the hiring side, only authenticated users in your workspace can view reports; shared report links can be scoped to expire automatically. For the full security and compliance posture, see the Security, Privacy & Compliance section on the AI interview software page.

Ready to Try?

Start with 3 free interviews, no credit card. Configure your first role in a minute (one click) or 5 minutes (manual) and see your first scored report the same day.

Frequently Asked Questions

How long does it take to set up AI Screenr?
Under a minute with one-click AI-generated job configuration — paste a job description and the AI produces the full setup (skills, knockouts, rubric weights, up to 5 structured question blueprints). If you prefer manual setup, expect about 5 minutes. Everything is editable after launch.
What does a candidate see in an AI Screenr interview?
Candidates click the interview link on any device, accept consent, run a 10-second mic check, and start a real voice conversation with the AI. No app install, no account creation, no calendar slot. The AI greets them, explains the flow, asks questions, and adapts follow-ups based on each answer. Total time is typically 15–25 minutes (configurable 5–60 per role).
How quickly do I get scores after an interview?
Under 2 minutes from candidate completion to scored report in your dashboard. The AI transcribes the audio, scores 8 default rubric dimensions (plus a 9th language-proficiency dimension for non-English interviews), attaches transcript evidence to every score, and produces a 4-point hiring recommendation.
Can I change the rubric mid-pipeline?
Yes. Edit any question, follow-up, or scoring weight at any time. Already-completed interviews keep their original scores — the rubric version is saved with each report, so you see clean version history instead of a silent recomputation.
How do I share AI Screenr reports with hiring managers?
Every report has a share link, PDF export, and a copyable summary block for Slack or email. Hiring managers do not need an AI Screenr account to open the shared report. You can also give them read-only dashboard access if they want to see more than one candidate at a time.
What happens if a candidate goes silent or gets disconnected?
The AI waits, gently re-prompts, and allows pauses without penalising the candidate. If the connection drops entirely, the candidate can resume from the same link within 24 hours — the interview picks up where it left off. Partial interviews are flagged in the report so you can decide whether to re-invite.
Can candidates retake an AI interview?
By default each candidate has one attempt per role to keep the evaluation fair and comparable. You can allow a retake manually if a technical issue occurred; the prior transcript is kept for reference.
How are interviews delivered to candidates?
You copy one interview link and drop it into your usual candidate workflow — ATS auto-reply, recruiter email, job-board message, SMS. No scheduling, no integration project. The link works on any modern mobile or desktop browser.
Does the AI Screenr report sync back to my ATS?
AI Screenr is ATS-agnostic. You can copy the shortlist URL into your ATS, export the PDF, paste the summary block, or use a webhook to push scores into any system that accepts inbound JSON. Dedicated integrations with Greenhouse, Lever, Workable, Ashby, Personio and others are available for teams that want automatic sync.
How is the AI interview scored under the hood?
Each answer is evaluated against 8 default rubric dimensions — Communication Clarity, Relevance, Technical Knowledge, Problem-Solving, Role Fit, Confidence, Behavioral Fit, Completeness — on a 0–100 weighted scale. Every dimension score carries an evidence-quality label (Strong / Moderate / Weak / None), a confidence value, and direct transcript quotes. A 9th language-proficiency dimension is added automatically for non-English interviews.

See how it works — try 3 interviews free

Start with 3 free interviews — no credit card required.

Try Free