Learning Development and Success

Time Management Self-Assessment

Are you a good time manager? Respond to the following inventory. Indicate Yes if the statement applies more often than not. Choose No if the statement does not apply most of the time.

How did you do?

Want to become a better time manager?

Learning Development counsellors can help Western students learn and practice effective time management skills. For help setting goals, creating schedules, or monitoring progress, meet with one of the counsellors for an individual appointment. You can print or download 4-Month Term calendars or a weekly calendar from the main page.

  • Download the Time Management Self-Assessment (PDF)

Learning Development and Success, Room 4100, Western Student Services Building, London, Ontario, Canada, N6A 3K7. Tel: 519-661-2183


Multi-Tasking: 40 Useful Performance Feedback Phrases

Multi-Tasking: Use these sample phrases to craft meaningful performance evaluations, drive change and motivate your workforce.

Multi-tasking is the ability to juggle and perform more than one task at a time without losing track of what one is working on or dropping the ball.

Multi-Tasking: Exceeds Expectations Phrases

  • Evaluates when multitasking is necessary; whether it is going to help one get more work done or is only going to result in one doing multiple tasks slowly or badly
  • Sets aside time for intense or complex tasks that require one's full concentration
  • Chooses tasks that are fluent, routine, and familiar to multi-task, rather than those that require one's full focus to accomplish
  • Selects one's tasks, with a general sequence of events in mind, in order to complete them without needless repetition or redundancy
  • Starts the more involved or longer tasks first, and fills in the gaps with the shorter, self-contained or well-defined tasks
  • Thinks about whether there are resources to manage or distribute other than own time and attention
  • Works ahead; starts early to set up and prepare when one knows there will be a big rush
  • Allows extra time for interruptions when planning for how long one expects everything else to take
  • Sets a timer or makes a mental note to remind oneself to pay attention to a task when one gets distracted
  • Eliminates unnecessary tasks from one's plan in order to be more efficient

Multi-Tasking: Meets Expectations Phrases

  • Makes a list of things one needs to refer to often and puts it next to one's computer for quick access
  • Chooses tasks that can be paused easily especially when one's multitasking involves dealing with multiple interruptions
  • Keeps a selection of simpler projects or smaller tasks and performs them while waiting for inspiration or information on a larger project
  • Uses waiting time efficiently; always has a portable task to do such as reading especially in places where one anticipates waiting
  • Takes breaks when one needs them in order to balance rushes and refresh one's mind for tasks that require intense focus
  • Posts one's to-do list in a prominent spot in order to remind oneself what really needs to get done
  • Simplifies tasks that one cannot eliminate, such as routine tasks, and performs them in only as much detail as they require
  • Pauses tasks at natural points, such as the end of a page, and does what one needs to, to remind oneself to resume
  • Chooses compatible tasks such as reading a book and clearing one's desk and does them together
  • Shifts multitasking to single-tasking throughout the day to allow one's mind to reboot

Multi-Tasking: Needs Improvement Phrases

  • Waits until one is already in the middle of a task to decide what else one needs to accomplish
  • Does not know how to differentiate between tasks that need one's complete attention and those that work well with multi-tasking
  • Often loses focus and track of tasks when presented with multiple tasks that demand one's full attention at work
  • Spends most of one's time on tasks that do well with multi-tasking and forgets the most important tasks on one's job description
  • Finishes one task and moves to the next but still thinks about the task one has just finished
  • Starts thinking of other things that one needs to do thus is not able to give full attention to the task at hand
  • Does not split up the steps for each task or create mini-deadlines for easier management thus fails to complete tasks on time
  • Does not cluster tasks and subtasks according to how, what, and where one needs to complete them, and thus is unable to differentiate between online and offline tasks
  • Does not ask for help or advice from colleagues when one is stuck thus wastes valuable time trying to figure things out
  • Does not take breaks in between tasks to reassess new information or let one's mind rest

Multi-Tasking: Self Evaluation Questions

  • Describe a time when you have had to perform multiple tasks at the same time. What are some of the challenges you faced?
  • What effect does handling many things simultaneously have on you? When is the last time this happened?
  • What system do you use to keep track of multiple projects? How has this helped you meet deadlines?
  • Describe a situation that required you to do a number of things at the same time. How did you handle it?
  • How do you prioritize your tasks to make sure that all are attended to and that they meet deadlines?
  • What are some of the ways and techniques that you have found to make handling of multiple tasks easier and more effective?
  • What is the most difficult multi-tasking experience you have ever had? What did you do and what was the outcome?
  • Are there times when you have been interrupted while multi-tasking? What happened? What have you done to reduce or avoid interruptions?
  • What tips have you used to differentiate tasks that fit well in multi-tasking from those that require your undivided attention?
  • What are some of the resources you have used for effective multi-tasking other than your own time and attention?


Task 5 Quiz: Social Studies (20 questions)

Using stories to bring about deeper understanding is an example of which facet of understanding?

Which characteristic of high quality assessment addresses the question, “Would the same outcome occur across administrations, over time or across raters?”

Which of the following is most true about the use of standardized tests?

  • They are effective when used for formative assessment.
  • They are administered by people with some form of training in standardized procedures.
  • Standardized tests are subjective in nature.
  • They are used mostly for progress monitoring with generally little at stake beyond the classroom.

Which of the following assessment methods work when teachers are trying to evaluate “enduring understanding” in their students?

  • Informal Checks for Understanding
  • Academic Prompts
  • Performance Tasks and Projects
  • Both B and C

Which of the following is a characteristic of a test or a quiz?

  • They are most appropriate for assessing enduring understanding of concepts.
  • They tend to be administered in an informal way.
  • It is nearly impossible to make them secure; most students know the answers to questions before administration.
  • They are convenient to administer and score.

Assessment decisions at the school level are important in order to fulfill these types of needs:

  • To know if students have a purpose for learning a specific lesson.
  • Federal and state policy
  • Staff training and curriculum needs
  • Individual progress monitoring and feedback

According to Understanding by Design theory, assessment and evaluation involve:

  • determining acceptable evidence of understanding
  • decisions made in the final stages of planning
  • is usually considered separately from learning objectives to increase validity.
  • should always involve standardized tests.

According to Bloom, critically examining information and making judgments are processes involved at which category of higher order thinking?

According to Backward Design theory, performance tasks and projects promote enduring understanding because:

  • They can only be graded by teachers
  • They only allow for one true answer
  • They are open ended, complex and authentic
  • They use formulas and algorithms

At the classroom level assessment has which of the following purposes?

  • To inform Federal and State decisions on educational policy
  • To give students a purpose for learning
  • To determine staff training needs
  • To see if students with disabilities are meeting district goals

Which of the following is not part of the assessment process?

  • Establish learning objectives
  • Using a quiz consistently year after year
  • Match instruction to objectives and assessment
  • Reflect on assessment

Summative assessment:

  • Tends to be informal measures like observations or dialoguing with a student to check for understanding
  • Standardized tests for college admissions like the SAT or ACT
  • Should always involve some objective test.
  • Occurs at the end of a unit and provides students an opportunity to demonstrate learning and understanding.

This assessment method is formative, informal, informs instruction and occurs frequently.

  • Checks for Understanding
  • Tests and Quizzes

Which of the following is not important when developing effective learning objectives?

  • Identify desired results
  • Use objectives to provide an anticipatory set
  • Determine reading levels of students
  • Creating opportunities for self-reflection

Which of the following is true about Common Core standards?

  • The federal government has published curriculum to accompany the Common Core Standards.
  • Teachers are discouraged from teaching novels and fiction in favor of informational text.
  • The Obama Administration mandated that all states adopt the Common Core Standards
  • States must administer an approved national test

Which of the following groups was not given a direct say in the development and adoption of Common Core Standards?

  • Federal Government
  • Foundations and Corporations

Which of the following was a positive goal for the development of Common Core Standards?

  • To have common definitions of proficiency in each grade in every state.
  • Emphasis on "fewer, higher, and deeper"
  • To develop a new way to teach math
  • Both A and B

Which of the following are aspects of Bloom's Taxonomy?

  • Not all thinking is equal
  • There is a continuum of thinking with the stages ordered 1 through 10
  • Each type of thinking is different and distinct from the other types.
  • It is unrelated to learning objectives and how to assess these objectives.

Backward Design was developed by:

  • Piaget and Vygotsky
  • Sternberg and Gardner
  • Wiggins and McTighe

Which of the following is not an aspect of understanding?

Time-on-Task Evaluation ©1999 Eileen Bonine

It has been demonstrated that increased time spent on learning activities yields increased learning, provided that the teacher is competent and the learning activities are effectively designed and implemented (Brophy, 1988). This hopefully is no surprise to anyone. Two elements of time spent, as described by Levin & Nolan (1996), are the time allocated to teaching a subject, and the students' time spent actively engaged in learning. The concept of "time-on-task" has been derived as a measure of the latter variable. Few teachers would dispute that one of their primary objectives is to keep the class on task as much as possible, particularly since the time allocated to teaching is compromised by administrative needs, announcements, and other interruptions. A method for assessing time-on-task in a classroom setting will be discussed.

When evaluating a class for "time-on-task" one is asked to scan the classroom, noting and recording individual student behavior at regular time intervals. The experience and skill level of the observer will help determine the observation interval, but an interval of approximately 5 seconds between observations is suggested. To avoid bias, the observer must make a random sampling of student behavior by selecting students from all areas of the classroom. A sampling plan, indicating an order in which to observe individual students, is provided to assist the observer.

Students are judged to be on-task, misbehaving, or doing nothing. The observer selects one of these three descriptions of the student's behavior and records either a letter T (on-task), a letter B (misbehaving), or nothing (not on task, not misbehaving). At the end of the observation session, the data are tallied and a percent time-on-task score is assessed. In order to accurately assess time-on-task, the observer must be able to clearly distinguish between these three behaviors. In certain learning situations, this may be fairly difficult to ascertain. When a student is sitting quietly, who can really determine whether or not he is on task? If the student is thinking about or processing the subject material, formulating a question or an answer, or simply listening and absorbing, he may be judged to be doing nothing when he is in fact on-task and actively learning. The five-second sampling interval requires the observer to make a snap decision without benefit of careful study.

The calculation of time-on-task is made by dividing the number of on-task observations by the total number of observations. Should the "nothing" data points be excluded from the total? This bears careful consideration. The number of these null points, of course, has a bearing on the decision. A data set with very few null points will not be greatly affected either way, but a large number of null points can sway the on-task percentage significantly. If the objective of the evaluation is to determine time spent effectively on learning activities, and the observer confidently assigns the null value to mean "not on task, not misbehaving", then the points should be included. Excluding them will give a falsely high on-task rating. If the observer cannot confidently determine that the student is not on task, the points should be excluded.
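The calculation can be sketched in a few lines; the function name and sample data below are illustrative, not part of the original evaluation:

```javascript
// Illustrative sketch of the time-on-task calculation described above.
// 'T' = on-task, 'B' = misbehaving, null = not on task, not misbehaving.
function timeOnTaskPercent(observations, { includeNulls = true } = {}) {
  const onTask = observations.filter((o) => o === 'T').length;
  const total = includeNulls
    ? observations.length
    : observations.filter((o) => o !== null).length;
  return total === 0 ? 0 : (100 * onTask) / total;
}
```

With observations ['T', 'T', null, 'B', 'T', null], including the null points gives a 50% on-task rating, while excluding them gives 75%, a concrete illustration of how a large number of null points can sway the score.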

I had the opportunity to practice the time-on-task observation process in a recent teaching simulation. The teacher was attempting to have the class act out a scene from Romeo and Juliet. The students mostly stood clustered around the teacher, occasionally wandering off on their own. They often had their backs turned, making it difficult to judge their behavior with their faces obscured. If they were talking or laughing among themselves or otherwise clearly misbehaving, there was no problem assessing them, but if they were silent there was no way to tell with their backs turned if they were paying attention and on-task. I had difficulty randomizing the observations. I had a tendency to get caught up in whatever action was taking place, and would either suspend my observations temporarily or focus on one cluster of students at the expense of those on the fringe of the activity. This perhaps skewed my observations, resulting in an incorrectly high measure of misbehavior, but I can't be sure that the other students weren't also off-task as I was.

The behavior table is attached in Appendix A. The table was expanded to hold the complete data set. The data are summarized below:    

In this case, the null points had little effect on the overall assessment. Looking at the data table in Appendix A, one can see that the misbehavior tended to be infectious. A few on-task ratings tended to be followed by a longer string of misbehaviors. Particularly because the students were standing around in an informal cluster, they had a tendency to get drawn into what was going on in their vicinity. Their close proximity to one another made it easy for them to see each other's behavior and mirror it. Had the students been sitting separately and individually at their desks, neatly lined up in rows facing forward, this may have been less of a problem. They may not have become engaged in the activity, however, resulting in similar off-task scoring. While the students were unruly, they seemed to be getting into the spirit of the playacting, and while they were off-task frequently, my sense is that they weren't really far off and could have been brought back in. The data show strings of on-task behavior, as well as misbehavior. The group dynamics tended to dictate on-task behavior. With the exception of one student with a wandering-off problem, they were either all engaged or all misbehaving.

I found the evaluation process to be highly subjective and uncomfortably imprecise. I was unable to follow the suggested randomization pattern, and just did my best to fairly scan the classroom. An observer would need to gain a fair amount of experience before he could be confident in the reproducibility of his results. Two side-by-side observers in our simulation differed by greater than 10 percentage points.

An acceptable score would seem to be situational. As observed in the Romeo and Juliet scenario, the classroom setting impacted the scoring. If the data were used for trending evaluations and afforded a high margin of experimental error, the teacher might be able to use the data to compare classroom behavior on different days, with different students, or perhaps the same class with a different teacher. The method provides one measure of classroom control and when performed by a trained observer, could become a useful tool for a teacher wishing to monitor classroom performance. I would not be confident comparing the results obtained with this class with those obtained from another type of classroom situation or setting.

Brophy, J. E. (1988). Educating teachers about managing classrooms and students. Teaching and Teacher Education, 4(1), 3.

Levin, J., & Nolan, J. F. (1996). Principles of Classroom Management (2nd ed.). Boston: Allyn and Bacon.

Task 5 - Develop and implement the evaluation plan

Evaluation goes beyond measuring performance against indicators. It answers questions about quality, value, merit and learning achievements such as whether a learning initiative is worthwhile and achieving the expected outcomes, what works well (or not) and why.

Without defensible evaluation evidence, you can't really know whether learning initiatives are effective or are achieving your desired outcomes.

APS Learning Evaluation Framework

[Figure: a ring divided into three segments illustrating the cycle of Evaluation, Organisational Insight and Monitoring, which together drive Evaluation Practice and Evaluation Culture]

The APS Learning Evaluation Framework (the Framework) emphasises the interdependent relationship between continuous and routine monitoring, evaluation and organisational insights. The Framework seeks to highlight and strengthen the connection between these three elements. Each element supports the others, improving their quality as well as its own.

What to include in an evaluation plan

Acquiring rich information about learning transfer and its effects requires different inquiry methods than those used to measure attendance and learner perceptions. You should choose appropriate evaluation data collection methods to enhance the quality and potential impact of your organisational insights.

When seeking to answer evaluation questions, chosen inquiry methods must be feasible to implement within the allocated time period and available resources. Look at the example Evaluation Plan Template to better understand how an evaluation plan can help.

Selecting the right methods for data collection


Fact sheet 10 in the handbook contains information about data collection methods for evaluation.

Telling the story

Narrative methods are a form of qualitative, storytelling data collection that can provide a rich source of powerful feedback about learning transfer and its effects.

Although narrative methods don’t need to be included in every evaluation and shouldn’t be used on their own, incorporating narrative methods into the evaluation plan provides a range of benefits.

Fact sheet 11 in the handbook describes the benefits of narrative methods in learning evaluation and outlines two narrative methods you can use.

End of Task 5: Select another tile to continue exploring the Learning Evaluation Handbook.

Contact the APS Academy

For further information and support, or to provide feedback on the Handbook, please visit the APS Academy's contact page.


Script evaluation and long tasks

When loading scripts, it takes time for the browser to evaluate them prior to execution, which can cause long tasks. Learn how script evaluation works, and what you can do to keep it from causing long tasks during page load.

Jeremy Wagner

When it comes to optimizing Interaction to Next Paint (INP), most of the advice you'll encounter is to optimize interactions themselves. For example, in the optimize long tasks guide, techniques such as yielding with setTimeout, isInputPending, and so forth are discussed. These techniques are beneficial, as they give the main thread some breathing room by avoiding long tasks, allowing interactions and other activity to run sooner than they would if they had to wait for a single long task.
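As a rough sketch of the yielding technique mentioned above (the helper names here are illustrative, not from the guide):

```javascript
// Break a potentially long task into smaller ones by yielding to the
// main thread between chunks of work, so pending input can run sooner.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processItems(items, handleItem) {
  for (const item of items) {
    handleItem(item);
    await yieldToMain(); // give the browser a chance to handle input
  }
}
```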

However, what about the long tasks that come from loading scripts themselves? These tasks can interfere with user interactions and affect a page's INP during load. This guide will explore how browsers handle tasks kicked off by script evaluation, and look into what you may be able to do to break up script evaluation work so that your main thread can be more responsive to user input while the page is loading.

What is script evaluation?

If you've profiled an application that ships a lot of JavaScript, you may have seen long tasks where the culprit is labeled Evaluate Script.

Script evaluation work as visualized in the performance profiler of Chrome DevTools. The work causes a long task during startup, which blocks the main thread's ability to respond to user interactions.

Script evaluation is a necessary part of executing JavaScript in the browser, as JavaScript is compiled just-in-time before execution. When a script is evaluated, it is first parsed for errors. If the parser doesn't find errors, the script is then compiled into bytecode, and then can continue onto execution.

Though necessary, script evaluation can be problematic, as users may try to interact with a page shortly after it initially renders. However, just because a page has rendered doesn't mean that the page has finished loading. Interactions that take place during load can be delayed because the page is busy evaluating scripts. While there's no guarantee that the desired interaction can take place at this point in time—as a script responsible for it may not have loaded yet—there could be interactions whose JavaScript is already ready, or interactivity that doesn't depend on JavaScript at all.

The relationship between scripts and the tasks that evaluate them

How tasks responsible for script evaluation are kicked off depends on whether the script is loaded via a regular <script> element, or as a module with the type=module attribute. Because browser engines tend to handle things differently, how the major engines handle script evaluation will be touched on where their behaviors vary.

Loading scripts with the <script> element

The number of tasks dispatched to evaluate scripts generally has a direct relationship with the number of <script> elements on a page. Each <script> element kicks off a task to evaluate the requested script so it can be parsed, compiled, and executed. This is the case for Chromium-based browsers, Safari, and Firefox.

Why does this matter? Let's say you're using a bundler to manage your production scripts, and you've configured it to bundle everything your page needs to run into a single script. If this is the case for your website, you can expect that there will be a single task dispatched to evaluate that script. Is this a bad thing? Not necessarily—unless that script is huge.

You can break up script evaluation work by avoiding loading large chunks of JavaScript, and load more individual, smaller scripts using additional <script> elements.

While you should always strive to load as little JavaScript as possible during page load, splitting up your scripts ensures that, instead of one large task that may block the main thread, you have a greater number of smaller tasks that won't block the main thread at all, or at least block it less than what you started with.
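In markup terms, the difference might look something like this (the file names are hypothetical):

```html
<!-- One monolithic bundle: a single, potentially long, evaluation task -->
<script src="/js/bundle.js" defer></script>

<!-- The same code split up: several smaller evaluation tasks -->
<script src="/js/vendor.js" defer></script>
<script src="/js/product-list.js" defer></script>
<script src="/js/cart.js" defer></script>
```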

Multiple tasks involving script evaluation as visualized in the performance profiler of Chrome DevTools. Because multiple smaller scripts are loaded instead of fewer larger scripts, tasks are less likely to become long tasks, allowing the main thread to respond to user input more quickly.

You can think of breaking up tasks for script evaluation as being somewhat similar to yielding during event callbacks that run during an interaction. However, with script evaluation, the yielding mechanism breaks up the JavaScript you load into multiple smaller scripts, rather than a smaller number of larger scripts that are more likely to block the main thread.

Loading scripts with the <script> element and the type=module attribute

It's now possible to load ES modules natively in the browser with the type=module attribute on the <script> element. This approach to script loading carries some developer experience benefits, such as not having to transform code for production use—especially when used in combination with import maps. However, loading scripts in this way schedules tasks that differ from browser to browser.

Chromium-based browsers

In browsers such as Chrome—or those derived from it—loading ES modules using the type=module attribute produces different sorts of tasks than you'd normally see when not using type=module. For example, a task will run for each module script, involving activity labeled as Compile module.

Module compilation work in multiple tasks as visualized in Chrome DevTools.

Once the modules have compiled, any code that subsequently runs in them will kick off activity labeled as Evaluate module.

Just-in-time evaluation of a module as visualized in the performance panel of Chrome DevTools.

The effect here—in Chrome and related browsers, at least—is that the compilation steps are broken up when using ES modules. This is a clear win in terms of managing long tasks; however, the module evaluation work that results still means you're incurring some unavoidable cost. While you should strive to ship as little JavaScript as possible, using ES modules—regardless of the browser—provides the following benefits:

  • All module code is automatically run in strict mode, which allows potential optimizations by JavaScript engines that couldn't otherwise be made in a non-strict context.
  • Scripts loaded using type=module are treated as if they were deferred by default. It's possible to use the async attribute on scripts loaded with type=module to change this behavior.
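A brief sketch of those two behaviors (the paths are hypothetical):

```html
<!-- Deferred by default: evaluated after the document has been parsed -->
<script type="module" src="/js/app.mjs"></script>

<!-- async opts this module out of the default deferred ordering -->
<script type="module" async src="/js/analytics.mjs"></script>
```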

Safari and Firefox

When modules are loaded in Safari and Firefox, each of them is evaluated in a separate task. This means you could theoretically load a single top-level module consisting of only static import statements to other modules, and every module loaded will incur a separate network request and task to evaluate it.

Loading scripts with dynamic import()

Dynamic import() is another method for loading scripts. Unlike static import statements that are required to be at the top of an ES module, a dynamic import() call can appear anywhere in a script to load a chunk of JavaScript on demand. This technique is called code splitting.
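A minimal, self-contained sketch of a dynamic import() call; a data: URL stands in for a real module file purely for illustration:

```javascript
// Illustrative sketch only: the data: URL below substitutes for a real
// module file such as './chart.js' so the snippet is self-contained.
const moduleUrl = 'data:text/javascript,export const double=(n)=>n*2;';

async function lazyDouble(n) {
  // The module is fetched, compiled, and evaluated in its own task, and
  // only when this function first runs rather than during page startup.
  const { double } = await import(moduleUrl);
  return double(n);
}
```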

Dynamic import() has two advantages when it comes to improving INP:

  • Modules which are deferred to load later reduce main thread contention during startup by reducing the amount of JavaScript loaded at that time. This frees up the main thread so it can be more responsive to user interactions.
  • When dynamic import() calls are made, each call will effectively separate the compilation and evaluation of each module to its own task. Of course, a dynamic import() that loads a very large module will kick off a rather large script evaluation task, and that can interfere with the ability of the main thread to respond to user input if the interaction occurs at the same time as the dynamic import() call. Therefore, it's still very important that you load as little JavaScript as possible.

Dynamic import() calls behave similarly in all major browser engines: the number of script evaluation tasks that result will match the number of modules that are dynamically imported.

Loading scripts in a web worker

Web workers are a special JavaScript use case. Web workers are registered on the main thread, and the code within the worker then runs on its own thread. This is hugely beneficial in the sense that—while the code that registers the web worker runs on the main thread—the code within the web worker doesn't. This reduces main thread congestion, and can help keep the main thread more responsive to user interactions.

In addition to reducing main thread work, web workers themselves can load external scripts to be used in the worker context, either through importScripts or static import statements in browsers that support module workers. The result is that any script requested by a web worker is evaluated off the main thread.
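A minimal browser-only sketch of this split (file names are hypothetical, and this isn't runnable outside a page context):

```js
// main.js (runs on the main thread): registering the worker is cheap.
const worker = new Worker('worker.js');

// worker.js (runs on the worker thread): these scripts are fetched,
// parsed, and evaluated entirely off the main thread.
importScripts('heavy-parser.js', 'analytics.js');
```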

Trade-offs and considerations

While breaking up your scripts into separate, smaller files helps limit long tasks as opposed to loading fewer, much larger files, it's important to take some things into account when deciding how to break scripts up.

Compression efficiency

Compression is a factor when it comes to breaking up scripts. When scripts are smaller, compression becomes somewhat less efficient. Larger scripts will benefit much more from compression. While increasing compression efficiency helps to keep load times for scripts as low as possible, it's a bit of a balancing act to ensure that you're breaking up scripts into enough smaller chunks to facilitate better interactivity during startup.

Bundlers are ideal tools for managing the output size for the scripts your website depends on:

  • Where webpack is concerned, its SplitChunksPlugin plugin can help. Consult the SplitChunksPlugin documentation for options you can set to help manage asset sizes.
  • For other bundlers such as Rollup and esbuild, you can manage script file sizes by using dynamic import() calls in your code. These bundlers—as well as webpack—will automatically break off the dynamically imported asset into its own file, thus avoiding larger initial bundle sizes.
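For example, a minimal webpack configuration sketch using SplitChunksPlugin; the maxSize value here is purely illustrative, not a recommendation:

```js
// webpack.config.js
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',   // consider all chunks for splitting
      maxSize: 100000, // ask webpack to aim for chunks under ~100 kB
    },
  },
};
```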

Cache invalidation

Cache invalidation plays a big role in how fast a page loads on repeat visits. When you ship large, monolithic script bundles, you're at a disadvantage when it comes to browser caching. This is because when you update your first-party code—either through updating packages or shipping bug fixes—the entire bundle becomes invalidated and must be downloaded again.

By breaking up your scripts, you're not just breaking up script evaluation work across smaller tasks, you're also increasing the likelihood that return visitors will grab more scripts from the browser cache instead of from the network. This translates into an overall faster page load.

Nested modules and loading performance

If you're shipping ES modules in production and loading them with the type=module attribute, you need to be aware of how module nesting can impact startup time. Module nesting refers to when an ES module statically imports another ES module that statically imports another ES module:
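A reconstruction of that kind of chain, using the a.js/b.js/c.js file names from this example (contents are illustrative):

```js
// a.js — loaded via <script type="module" src="a.js">
import './b.js';

// b.js — only requested once a.js has been fetched and parsed
import './c.js';

// c.js — the third request in the chain
console.log('c.js evaluated');
```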

If your ES modules are not bundled together, the preceding code results in a network request chain: when a.js is requested from a <script> element, another network request is dispatched for b.js, which then involves another request for c.js. One way to avoid this is to use a bundler—but be sure you're configuring your bundler to break up scripts to spread out script evaluation work.

If you don't want to use a bundler, then another way to get around nested module calls is to use the modulepreload resource hint, which will preload ES modules ahead of time to avoid network request chains.
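As a sketch, for the a.js/b.js/c.js chain above (paths are hypothetical):

```html
<!-- Preload the nested modules so they don't wait on the request chain. -->
<link rel="modulepreload" href="b.js">
<link rel="modulepreload" href="c.js">
<script type="module" src="a.js"></script>
```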

Optimizing evaluation of scripts in the browser is no doubt a tricky feat, and the right approach depends on your website's requirements and constraints. However, by splitting up scripts, you spread the work of script evaluation over numerous smaller tasks, giving the main thread the ability to handle user interactions more efficiently instead of being blocked.

To recap, here are some things you can do to break up large script evaluation tasks:

  • When loading scripts using the <script> element without the type=module attribute, avoid loading scripts that are very large, as these will kick off resource-intensive script evaluation tasks that block the main thread. Spread out your scripts over more <script> elements to break up this work.
  • Using the type=module attribute to load ES modules natively in the browser kicks off an individual evaluation task for each separate module script.
  • Reduce the size of your initial bundles by using dynamic import() calls. This also works in bundlers, as bundlers will treat each dynamically imported module as a "split point," resulting in a separate script being generated for each dynamically imported module.
  • Be sure to weigh trade-offs such as compression efficiency and cache invalidation. Larger scripts will compress better, but are more likely to involve more expensive script evaluation work in fewer tasks, and result in browser cache invalidation, leading to overall lower caching efficiency.
  • If using ES modules natively without bundling, use the modulepreload resource hint to optimize their loading during startup.
  • As always, ship as little JavaScript as possible.

It's a balancing act for sure—but by breaking up scripts and reducing initial payloads via dynamic import() , you can achieve better startup performance and better accommodate user interactions during that crucial startup period. This should help you score better on the INP metric, thus delivering a better user experience.

Hero image from Unsplash, by Markus Spiske.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2023-05-09 UTC.

