Digital mentoring after the test

Task

Self-report is a feature developed by Medway with the aim of capturing and analyzing student data in relation to medical residency exams.

About the project

The aim is to better understand Medway students’ behaviors and intentions during their medical residency exam periods.

The central proposal is to understand the student’s intention to take the test, verify whether they actually took the test, and evaluate their performance. This data is used to generate insights about the student’s profile and help improve the services offered by the platform, as well as to provide personalized recommendations to users.

  1. Increase student data capture;
  2. Improve the student experience by providing personalized feedback based on performance;
  3. Simplify the process of capturing information after exams – Answer Key;
  4. Create personalized study recommendations based on each student’s individual needs;
  5. Customer acquisition (Phase 2).

Business Discovery

Starting with a focus on strategic alignment and measurable impact.

Every project starts with a clear and well-defined goal. To ensure this alignment, I kicked off this discovery with a strategic workshop designed to establish objectives, identify the OKRs we want to achieve, and define the key metrics that will indicate success before and after implementation. In addition, it is crucial to understand who we are building the functionality for and what problems we are trying to solve.

When starting the project, we faced the complexity of dealing with multiple personas, diverse audiences, and different types of plans used by customers. To address this diversity, I brought together relevant stakeholders — Product Managers, Tech Leads, and others directly involved in the project — for effective and aligned collaboration.

The dynamic used: Opportunity Assessment

Opportunity Assessment is a methodology structured around four fundamental questions that guide the strategic direction of the project. This approach ensures that all decisions are connected to business objectives and client needs. The dynamic can be conducted in workshops or through specific meetings with those responsible for the project.

The four essential questions:

1. What is the business objective of this project?

What results do we hope to achieve by implementing the solution?

2. How will we know we have been successful?

What performance indicators (Key Results) will we monitor and how do we expect to impact them?

3. What customer problem are we solving?

What are the specific customer pain points or difficulties that the product should solve?

4. Who is our target market?

Who is our ideal customer, considering demographic context, segment and life stage?

Based on data and evidence

The answers to these questions should be supported by concrete data, such as product metrics analysis, test results, qualitative and quantitative research, or market insights. This foundation strengthens decision-making and increases the likelihood of project success.

We usually run this workshop when a project reaches the designer without many definitions in place.

Results of the Opportunity Assessment Workshop

What is the business objective of this project? (objectives)

The main goal of the self-report is to gain a deeper understanding of Medway students’ behaviors and intentions during their residency exam periods. To achieve this, we aim to:

  1. Increase student data capture;
  2. Improve the student experience by providing personalized feedback based on performance;
  3. Simplify the process of capturing information after exams – Answer Key;
  4. Create personalized study recommendations based on each student's individual needs;
  5. Customer acquisition (Phase 2).

How do we know if we were successful? (Key Results)

The success of this project will be measured by:

  1. Number of students who filled out and submitted the answer sheet;
  2. CSAT to measure overall satisfaction with the flow and the results of the answer sheet.

What customer problem do I want to solve?

Students seek clear and personalized guidance before and after exams, on top of the significant anxiety they experience before, during, and after the test. At the beginning of the project, we were certain that students wanted to know how their performance compared with their competitors; this was refuted in the qualitative research we conducted.

What is our target market? (single target market)

Clients who are in the process of taking the test

Project risk assessment stage

Another important step is to assess the risks involved in the project. Identifying them during the discovery stage ensures there are no surprises later on.

This workshop, called Risk Assessment, has no fixed moment at which it must happen, but it is important to run it before the ideation stage. The risk assessment framework contains 5 questions and aims to encourage discussion about the risks of an initiative so that they can be mitigated by the team during the product discovery process.

To start the workshop, simply answer the 5 questions about the initiative (ordered from highest to lowest risk):

Would customers buy this product? (Value risk)

Can customers use the product? (Usability risk)

Is the product aligned with the company’s purpose/strategy? (Business viability risk)

Can the product be built? (Technological risk)

Does the product need to be built? (Ethical risk)

The canvas is initially filled in with secondary data and should be revisited and re-discussed at the end of each experiment the team runs. If the evidence is inconclusive (we simply do not know), further experiments are needed to generate more evidence: prioritize the greatest open risk, prepare a design sprint to address it, and add the new evidence to the canvas.

Risk Assessment Workshop

Risk Assessment Workshop Results

Would students upload their answer sheets? (Value risk)

Main risks identified:

  1. The group had doubts about the perceived value of students uploading their answer sheets;
  2. The process may be considered time-consuming or complicated, leading students to drop out;
  3. Students may be afraid of exposing their performance data;
  4. Lack of clarity about what the student gains by uploading is also a risk.

Can the product be built? (Technological risk)

The main points identified were:

  1. Recognition of answers from an uploaded image can be problematic;
  2. Difficulties in the integration between the Medway Radar product and the platform;
  3. Conflicts between teams (e.g., prioritization of tasks).

Customer Problem

Quantitative research to understand the user better.

Qualitative and quantitative research methodologies were applied to understand user expectations.

The studies were conducted with students who use the platform, to understand their behavior during the exam period.

Research plan: Survey – About exam time.

The objective of this research is to explore the behavior of Medway students during the exam period, understanding their motivations, challenges, and how the platform can better serve them. The research focuses on subjective aspects, such as users’ feelings, frustrations, and expectations regarding exams and the use of the platform.

Question flow

The flow was designed so that, if the student answers that they have never taken an exam, we do not display the exam-related questions and skip straight to the end of the form.
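
To make the branching concrete, here is a minimal sketch of that skip logic in Python. The question identifiers and form structure below are hypothetical and only illustrate the rule described above, not the actual survey tooling.

```python
# Hypothetical sketch of the survey skip logic: students who have never taken
# an exam skip the exam-related questions and go straight to the end of the form.
# Question identifiers are illustrative, not the real survey fields.
EXAM_QUESTIONS = [
    "feeling_on_test_day",
    "attempts_last_year",
    "how_you_check_results",
    "what_you_do_after_the_test",
]
CLOSING_QUESTIONS = ["final_comments"]

def questions_to_show(has_taken_exam: bool) -> list[str]:
    """Return the questions displayed after the screening question."""
    if not has_taken_exam:
        return CLOSING_QUESTIONS          # jump straight to the end of the form
    return EXAM_QUESTIONS + CLOSING_QUESTIONS

print(questions_to_show(has_taken_exam=False))  # ['final_comments']
```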

How do you usually feel on test day?

Overall Results:

50.7% (189 responses): Students reported feeling “nervous” on test day.

25.2% (94 responses): They felt “neutral.”

15.3% (57 responses): They felt “confident.”

7.8% (29 responses): They reported feeling “very nervous”.

1.1% (4 responses): They reported feeling “very confident.”

 

Insights:

  1. Prevalence of Nervousness:
    More than half of students (50.7%) report feeling nervous on test day, which, along with the 7.8% who feel very nervous, highlights a clear pattern of anxiety. This data indicates that emotional preparation and stress management are critical aspects for students during tests.
  2. Emotional Impact:
    Only 16.4% of students rated themselves as “confident” or “very confident,” which suggests that most students face considerable emotional challenges at this time. This data may indicate an opportunity to provide resources that help increase students’ confidence before tests, such as content focused on last-minute review or tips on emotional control.
  3. Need for Support:
    A significant number of students (25.2%) feel “neutral,” which may represent an opportunity for intervention, as these students may benefit from motivational resources or encouragement to increase confidence before the test.

How many medical residency attempts have you made in the last year?

Overall Results:

33.5% (125 responses): Have not taken any tests yet.

30% (112 responses): Have taken between 4 and 6 tests.

27.3% (102 responses): Have taken between 1 and 3 tests.

6.7% (25 responses): Have taken between 7 and 9 tests.

2.4% (9 responses): Have taken more than 9 tests.


Insights:

  1. Beginning Student Segment (33.5%):
    A third of respondents have not yet taken an exam, indicating that they are in the early stages of preparing or choosing an exam. This demographic may be looking for introductory content on the exam process and how to prepare effectively.

  2. Experienced Group (30% + 27.3%):
    The majority of students (57.3%) have taken between 1 and 6 exams in the past year, representing a large portion who are actively preparing for multiple exams. These students are familiar with the registration process and have experience planning for consecutive exams.

  3. High Engagement (6.7% + 2.4%):
    Students who have taken between 7 and 9 exams (6.7%) or more than 9 exams (2.4%) are a minority, but they are highly engaged and likely to use study tools extensively. These students may value tools that help them better organize their exam schedule and provide advanced support.

How do you usually check if you did well on the test?

Overall Results:

54% (134 responses): They check their results using the official answer key.

44.8% (111 responses): They use the preliminary answer key released by the institution.

38.7% (96 responses): They follow live sessions of the post-test results.

10.1% (25 responses): They participate in discussions with other students or teachers.

7.7% (19 responses): They do not usually check their performance.

5.6% (14 responses): They check their answers on their own after the test.

0%: No other methods were mentioned.

 

Insights:

  1. Reliance on Official Answer Key (54%):
    More than half of respondents rely on the official answer key to check their performance, showing a high level of trust in information provided directly by the institution. This suggests that most students wait for this final result to assess their performance.

  2. Use of Preliminary Answer Key (44.8%):
    Almost half of students also use the preliminary answer key for an initial assessment. This suggests that many want a quick correction and are willing to check their results before the official one.

  3. Engagement with Post-Exam Lives (38.7%):
    Post-exam live assessments are a significant practice among students, with 38.7% using this resource. This indicates an opportunity for Medway to reinforce this format, possibly by offering its own live sessions or encouraging live discussions.

  4. Informal Discussions (10.1%):
    A smaller but relevant group participates in discussions with other students or teachers. This highlights the importance of interaction and information exchange after the exam, which may suggest features aimed at creating discussion groups or forums within the platform.

  5. Need for Automation and Support (7.7% and 5.6%):
A small number of students do not check their results or do so manually. These groups could benefit from an automated feature that offers fast and reliable analysis within the platform, removing the need for manual review and reducing the chance of not checking performance at all.

After taking a test, what do you actually do?

Overall Results:

37.5% (93 answers): They analyze what they got right and wrong on the test to improve.

34.7% (86 answers): They take a break and then resume studying.

19.8% (49 answers): They relax for a while before thinking about the next test.

7.3% (18 answers): They start studying for the next test immediately.

0.8% (2 answers): Other actions, such as short breaks followed by analysis.

 

Insights:

  1. Analysis of Successes and Mistakes (37.5%):
    Most students are concerned with analyzing their successes and mistakes as a strategy for continuous improvement. This suggests that many are committed to learning from their mistakes and identifying gaps in knowledge to improve their performance on future exams.

  2. Taking a Break Before Resuming Studies (34.7%):
    A large proportion prefer to take a break before returning to studying. This behavior reflects the need for mental rest after a stressful exam period, which may indicate that students value a balance between effort and rest.

  3. Relaxation (19.8%):
    Almost a fifth of students prefer to relax for a while before thinking about upcoming exams. This suggests that, for some, it is important to disconnect and recharge their batteries before entering another cycle of studying.

  4. Immediate Study (7.3%):
    A small percentage start studying immediately for upcoming exams, which suggests a highly dedicated group with a continuous focus on preparation. This group may benefit from immediate post-exam support, such as quick insights for next steps.

What do you see most value in receiving from Medway after a test?

Overall Results by Ranking:

 

Personalized study recommendations based on your mistakes (2.6 average):

30% (113 responses) ranked it as the most valuable feature.

24% ranked it as 2nd.

9% or less ranked it as the last feature.

 

Guidelines to prepare for upcoming exams (3.14 average):

23% ranked it as the most valuable feature.

The majority of respondents (over 50%) ranked it between 2nd and 4th.

 

Detailed correction based on the official preliminary answer key (3.14 average):

While only 17% ranked it #1, the majority ranked it in their top three.

25% ranked it #3, showing that it is still a high-value feature for many.

 

Early correction made by Medway teachers (3.32 average):

24% of students ranked this feature as the top choice, highlighting their interest in getting early feedback before the official answer sheet.

Ratings vary, but students still see value in getting quick, early corrections.

 

Suggestions on how to appeal specific issues (4.17 average):

26% ranked this feature 6th, indicating that it has the least perceived value.

Only 9% saw it as the most important feature.

 

Comparison of your performance with other students (4.63 average):

44% rated this feature as the least valuable, indicating that comparison between students is not a priority for most.

Only 5% rated it as the most valuable feature.
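
Assuming these averages are the mean rank position across respondents (rank 1 = most valuable, rank 6 = least valuable, so lower is better), a quick sketch of the calculation follows, with made-up counts used purely for illustration.

```python
# Hypothetical rank distribution for one feature (counts are illustrative only).
# Rank 1 = most valuable, rank 6 = least valuable; a lower mean means higher value.
rank_counts = {1: 113, 2: 90, 3: 70, 4: 50, 5: 30, 6: 24}

total_responses = sum(rank_counts.values())
mean_rank = sum(rank * count for rank, count in rank_counts.items()) / total_responses

print(f"Mean rank: {mean_rank:.2f} across {total_responses} responses")  # ~2.64
```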

Key research findings

Behavior during tests


Prevalence of Nervousness:
More than half of students (50.7%) report feeling nervous on test day, which, together with the 7.8% who feel very nervous, highlights a clear pattern of anxiety. This data indicates that emotional preparation and stress management are critical aspects for students during tests.

 

Emotional Impact:
Only 16.4% of students rated themselves as “confident” or “very confident,” suggesting that most students face significant emotional challenges at this time. This could indicate an opportunity to provide resources that can help boost students’ confidence before exams, such as content focused on last-minute review or tips on emotional control.

 

How do students know they did well on the test?


Reliance on Official Answer Key (54%):
More than half of respondents rely on the official answer key to check their performance, showing a high level of trust in information provided directly by the institution. This suggests that most students wait for this definitive result to assess their performance.

Use of the Preliminary Answer Key (44.8%):
Almost half of students also use the preliminary answer key for an initial assessment. This suggests that many want a quick correction and are willing to check their results before the official one is released.

Engagement with Post-Test Lives (38.7%):
Post-test live sessions are a significant practice among students, with 38.7% using them. This indicates an opportunity for Medway to reinforce this format, possibly by offering its own live sessions or encouraging live discussions.


After the test, what do you actually do?


Analysis of Successes and Mistakes (37.5%):
Most students are concerned with analyzing their successes and mistakes as a strategy for continuous improvement. This suggests that many are committed to learning from their mistakes and identifying gaps in knowledge to improve their performance on future tests.

 

Taking a Break Before Resuming Studies (34.7%):
A large proportion prefer to take a break before returning to studying. This behavior reflects the need for mental rest after a stressful period of exams, which may indicate that students value a balance between effort and rest.

 

Relaxation (19.8%):
Almost a fifth of students prefer to relax for a while before thinking about upcoming exams. This suggests that for some, it is important to disconnect and recharge before embarking on another study cycle.

 

Immediate Study (7.3%):
A small percentage begin studying immediately for upcoming exams, suggesting a highly dedicated group with a continued focus on preparation. This group could benefit from immediate post-exam support, such as quick insights into next steps.

 

What is the most valuable feature for the answer key?


Overall Results by Ranking:

 

Personalized study recommendations based on your mistakes (2.6 average):

30% (113 respondents) ranked this as the most valuable feature.

24% ranked it as the 2nd most valuable feature.

9% or less ranked it as the last.


Guidelines to prepare for upcoming exams (3.14 average):

23% ranked it as the most valuable feature.

The majority of respondents (over 50%) ranked it between 2nd and 4th.


Detailed correction based on the official preliminary answer key (3.14 average):

While only 17% ranked it #1, the majority ranked it in their top three.

25% ranked it #3, showing that it is still a high-value feature for many.


Early correction made by Medway teachers (3.32 average):

24% of students ranked this feature as the top choice, highlighting their interest in getting early feedback before the official answer sheet.

Ratings vary, but students still see value in getting quick, early corrections.


Suggestions on how to appeal specific issues (4.17 average):

26% ranked this feature 6th, indicating that it has the least perceived value.

Only 9% saw it as the most important feature.


Comparison of your performance with other students (4.63 average):

44% rated this feature as the least valuable, indicating that comparison between students is not a priority for most.

Only 5% rated it as the most valuable feature.

Margin of Error

User flow

To understand how our solution would fit into the current journey, we created a flowchart with the squad to map how we could design the solution.

The following was defined:

User Flow

  1. Step 1: The student answers an initial test intent question via a toast.
  2. Step 2: Between 7 and 1 days before the test, a notification asks whether the student feels prepared.
  3. Step 3: The student takes the test and fills out the answer key.
  4. Step 4: The student uploads the answer key and receives personalized feedback on correct answers, errors, and comparisons with the competition.
  5. Step 5: The platform suggests specific studies based on the student’s weaknesses, creating a study plan for the next test.
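
As a rough illustration, the journey above can be modeled as an ordered sequence of states. The names below are hypothetical and are not the production data model; they only restate the five steps in code.

```python
# Hypothetical sketch: the self-report journey as an ordered set of states.
from enum import Enum, auto

class SelfReportStep(Enum):
    DECLARED_INTENT = auto()       # Step 1: answers the test-intent toast
    PRE_TEST_CHECK_IN = auto()     # Step 2: 7 to 1 days before the test
    ANSWER_KEY_FILLED = auto()     # Step 3: fills out the answer key
    FEEDBACK_DELIVERED = auto()    # Step 4: personalized feedback on the upload
    STUDY_PLAN_SUGGESTED = auto()  # Step 5: recommendations based on weaknesses

def next_step(current: SelfReportStep) -> SelfReportStep | None:
    """Return the step that follows the current one, or None at the end."""
    ordered = list(SelfReportStep)
    position = ordered.index(current)
    return ordered[position + 1] if position + 1 < len(ordered) else None

print(next_step(SelfReportStep.DECLARED_INTENT).name)  # PRE_TEST_CHECK_IN
```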

Operations flow

We held a meeting with Rapha to understand how the current operations flow works during the exam period. This meeting helped us see how and when we would receive the exam correction data.

Without the data from Drops (the internal area responsible for generating this report), it would not be possible to generate the smart answer key.

UI Solution

Based on our user survey, in which 37.5% of students said they analyze what they got right and wrong in order to improve, we built the interface as follows.

 

Wireframe

Wireframe – Smart Answer Key

High fidelity

Based on the same survey finding that 37.5% of students analyze what they got right and wrong, we built the high-fidelity screens as follows.

Results

We use metrics to support us and understand whether the project was successful.

To do this, the CSAT (Customer Satisfaction Score) survey for the self-report was structured to directly measure student satisfaction after using the functionality, with an approach that is quick and objective while still capturing deeper feedback.

We structured a CSAT via Hotjar on the Smart Answer Key page, that is, at the end of the student’s journey.

  1. Welcome screen
  2. Question: CSAT (1-7)
  3. If 1-4 – Question: What can we improve?
  4. If 5-7 – Question: What did you like most?
  5. Question: Is there any functionality you would like to see in the Smart Answer Key?
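
A minimal sketch of that branching logic, assuming it boils down to simple score thresholds (the function below is illustrative and is not the actual Hotjar configuration):

```python
# Hypothetical sketch of the CSAT follow-up branching described above.
def follow_up_questions(csat_score: int) -> list[str]:
    """Return the follow-up questions for a CSAT answer on a 1-7 scale."""
    if not 1 <= csat_score <= 7:
        raise ValueError("CSAT score must be between 1 and 7")
    questions = []
    if csat_score <= 4:
        questions.append("What can we improve?")        # lower scores
    else:
        questions.append("What did you like most?")     # higher scores
    questions.append(
        "Is there any functionality you would like to see in the Smart Answer Key?"
    )
    return questions

print(follow_up_questions(3))  # low score: improvement question comes first
```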

CSAT

Overall Result:
  1. A predominantly high score (78.3% of responses gave a 5) indicates a very positive level of satisfaction with the service or product evaluated.
  2. The absence of intermediate responses (2 and 3) suggests a polarization in opinions or a clear and unanimous experience among users.
Positive Interpretation:
  1. This is an indicator of success, especially since 97.9% of the responses are in the highest grades (4 and 5), which demonstrates a high level of satisfaction.
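
For reference, percentages like these can be derived directly from the score distribution; the counts below are hypothetical and only illustrate the arithmetic, not the real survey data.

```python
# Hypothetical CSAT score counts on a 1-5 scale (illustrative numbers only).
score_counts = {1: 1, 2: 0, 3: 0, 4: 9, 5: 36}

total = sum(score_counts.values())
share_of_fives = score_counts[5] / total                        # top score only
share_of_top_two = (score_counts[4] + score_counts[5]) / total  # grades 4 and 5

print(f"5s: {share_of_fives:.1%}, 4s and 5s: {share_of_top_two:.1%}")
```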

Main positive highlights

  1. Competitor Comparison and Statistics: Comparing performance to other candidates was mentioned several times as a valuable feature. Users appreciated seeing how they performed against the competition and in specific areas (#1, #2, #4, #7, #9, #13).
  2. Study Targeting and Identifying Weaknesses: Many praised the study targeting feature and the clarity in identifying weakest areas to focus on in the next preparation cycle (#3, #6, #10, #18, #22, #27).
  3. Easy to Use and Speed: The ease of use and speed of analyzing results were mentioned repeatedly, highlighting the tool’s practicality for reviewing errors and tracking performance (#4, #5, #11, #12, #14, #16, #23).
  4. Results Storage and Tracking: The functionality of keeping answer keys saved and making adjustments based on the official answer keys was well received by those who track their own results over time (#25, #30).

Recurring feedback

  1. Division by Area and Analysis by Subject: Many highlighted the possibility of analyzing performance by areas and themes, allowing for more targeted study (#2, #8, #9, #20, #22, #29).
  2. Diagnosis and Cut-off Score: Some mentioned satisfaction with the post-test diagnosis and the inclusion of the cut-off score, which helps them better contextualize individual performance (#26, #31).
  3. Immediate Feedback: The ability to quickly check where they went wrong and receive immediate feedback was mentioned as useful and motivating (#15, #32).

Negative Feedback and Suggestions for Improvement

  1. MedBrain and Repetitive Targeting: One user commented that MedBrain always seems to suggest the same focuses, indicating a possible limitation or lack of variety in the guidance provided (#15).
  2. Difficulty Accessing Post-Test Results: There was a request for post-test results or diagnoses to be more accessible later, as some had difficulty finding the data after the first use (#30).

Most requested features

  1. Percentile and Comparative Ranking: Several users expressed interest in having a more detailed view of where they are positioned in relation to others, such as percentiles or a preliminary ranking, especially among candidates in the same specialty. This helps students better understand their relative position (#2, #4, #8, #12, #27).
  2. Post-Appeal Correction and Adjustments: Some users suggested adjustments to grades as appeals are released, allowing for alignment between the pre- and post-appeal answer sheets (#3, #17).
  3. Error-Based Exercise Guidance: Suggestions for study paths or exercise lists based on the questions students most frequently got wrong were common, allowing for more practical and personalized review (#6, #13, #26).

Suggestions for Improvements in User Experience

  1. Qualitative Analysis of Errors: There were requests to include an option where users can categorize their errors (e.g., lack of content, distraction), which would help identify and act on the types of errors made (#10).

  2. Full Availability and Storage: Some users expressed the desire for the Smart Answer Key to be available for all tests and mock exams, and for the results to be saved in a test tab for easy future access (#20, #23, #28).

  3. Navigation Between Questions: There was feedback that the navigation experience when reviewing questions could be improved, avoiding the need to return to the home screen between questions (#30, #32).

Other features and ideas

  1. Average and Correct Scores of Competitors: In addition to personal performance, some users would like to see the total number of correct answers of competitors and an overall average (#5, #31).
  2. Contextual Resources: It was suggested that questions with ready-made resources be flagged, making it easier to review these specific questions (#16).
  3. Suggestion of Content Based on Errors: Allowing direct access to content related to the topics where the student made the most mistakes was a suggestion, reinforcing the continuity between the analysis of results and the review material (#33).
Engagement results

Adoption metrics

 

48% adoption of Self Report (declaration of which tests they will take)

1800 users indicated which tests they will take

3800 users answered at least one of the questions

 

On average, each student declared that they will take 5.8 tests (matching the result of the quantitative survey at the beginning of discovery).

*This data was collected at an earlier point in time, so the current results may be higher.

The smart answer key feature showed good engagement in the first two tests. There are signs refuting the initial hypothesis that the effort required to upload answers to the answer key would result in low engagement.

The results may be correlated with communication efforts; to truly prove the value of the feature, we will need to continue monitoring adoption in upcoming tests and gather feedback from those who tried it.
