Chapter 2: Evaluation Overview

Why Evaluate?

An evaluation is a systematic assessment of how well a project or program is meeting established goals and objectives. Evaluations involve collecting and analyzing data to inform specific evaluation questions related to project impacts and performance.1 This performance information enables project managers to: Show
Evaluations can be used at different points in the project lifecycle. For example, some evaluations are conducted during implementation to assess whether a technology is operating as planned, while others are conducted post-implementation to assess the outcomes and impacts of a technology. Figure 1 shows where ATCMTD evaluation activities fit in the project lifecycle. During the pre-implementation phase, as the project design is underway, evaluation planning must also be conducted. The remainder of this chapter describes these key evaluation planning activities. During the implementation phase, as the technology is being tested and fully implemented, the data collection methods should also be tested and any baseline data collection should be completed (baseline data may also have been collected during pre-implementation). Once the technology has been implemented, post-deployment data are collected for the duration of the evaluation period. Grantees should report interim as well as final evaluation/performance measurement findings in their Annual Reports (see Appendix B for the Annual Report template).

Figure 1. Graphic. Project Lifecycle2

ATCMTD evaluations can largely be characterized as outcome evaluations. Outcome evaluations focus on whether a program or project has achieved its results-oriented objectives. However, ATCMTD grantees should consider ways to measure interim progress toward their outcomes. Early measurement will inform interim improvements, as necessary, and will also provide input into the required Annual Reports that document the benefits, costs, and effectiveness (among other measures) of the technologies being deployed. Evaluations should be systematically planned and executed to ensure findings are credible and actionable. The remainder of this section describes this systematic approach to an evaluation. When planning evaluations, constraints that may impact the ability to conduct evaluation activities should be taken into account.
In particular, evaluations should consider the financial and staff resources available for the assessment.

Assembling an Evaluation Team
The first step in conducting a project evaluation is assembling an evaluation team. Evaluations can be conducted using an internal evaluation team, independent evaluators, or a mix of both. Evaluators should be brought on board as early as possible so that the design of the evaluation can occur as the deployment is being planned and the project generates sufficient data to support the evaluation. Given the reporting requirements in the FAST Act, it is recommended that an independent evaluator be used to design and manage ATCMTD evaluations. Due to the complex nature of ATCMTD systems and technologies, evaluators should work closely with the ATCMTD project team.3 Evaluators should have regular access to the project team members who are implementing the technology and collecting the data. The project team should set up regular opportunities for the evaluators to work with data providers during and after the data collection period. Data issues are common, and it is best to troubleshoot these issues collaboratively.

Evaluation Planning Process

Developing an evaluation plan puts grantees in the best position to identify and collect the data needed to assess the impacts of their ATCMTD technology deployments. This plan is a blueprint for the evaluation; it includes the specifics of the evaluation design and execution, as well as a description of the project and its stakeholders. Table 1 describes the activities involved in evaluation planning and execution, each of which will be discussed in this chapter. Several templates are also included to assist grantees in structuring and documenting their evaluation and performance measurement plans.

Table 1. Evaluation Planning and Execution.
Set Evaluation Goals/Objectives

An evaluation should be guided by an agreed-upon set of project goals and objectives that drive the evaluation design. These goals and/or objectives should represent the core of what the project is trying to achieve. A logic model can be a helpful tool for evaluation teams to use as they identify goals, objectives, and related information needs. A logic model is a systematic and visual way to present and share your understanding of the relationships among the project resources, the planned activities, and the changes or results that the project hopes to achieve. In short, a logic model illustrates how the program's activities can achieve its goals. A logic model generally includes: resources or inputs, activities, outputs, outcomes, and impacts (see Figure 2).

Figure 2. Graphic. Project or Program Logic Model4

Additional details on logic models can be found at the following link:

ATCMTD project goals align with the priorities established in the FAST Act. These priorities relate to the use of advanced transportation technologies to improve safety, mobility, environment, system performance, and infrastructure return-on-investment. Table 2 includes some of the priority goal areas listed in the FAST Act (i.e., as described in 23 U.S.C. 503(c)(4)(F) and 23 U.S.C. 503(c)(4)(G), which outline the requirements for the Annual Reports and the Program Level Reports, respectively), along with potential objectives that should be considered in the development of project goals/objectives (see Chapter 3 for a set of recommended performance measures for each goal area).
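The logic model components described above (inputs, activities, outputs, outcomes, impacts) can be sketched as a simple data structure. The following Python sketch is purely illustrative; the transit signal priority (TSP) project and every value in it are hypothetical, invented for the example.

```python
# A minimal sketch of a logic model as a plain data structure, assuming a
# hypothetical transit signal priority (TSP) deployment. All values are
# invented for illustration; none come from a real project.
logic_model = {
    "inputs":     ["grant funding", "agency staff", "existing signal network"],
    "activities": ["install TSP equipment at 20 intersections",
                   "integrate TSP with the transit AVL system"],
    "outputs":    ["20 TSP-enabled intersections", "priority requests logged"],
    "outcomes":   ["reduced bus travel time on the corridor",
                   "improved schedule adherence"],
    "impacts":    ["improved regional transit mobility"],
}

def describe(model):
    """Render the logic model chain in reading order."""
    order = ["inputs", "activities", "outputs", "outcomes", "impacts"]
    return "\n".join(f"{key}: {'; '.join(model[key])}" for key in order)

print(describe(logic_model))
```

Writing the chain down in this order makes it easy to check that each outcome traces back to a concrete activity and output, which is the main point of a logic model.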
Develop Evaluation Questions

Once goals and objectives have been established, specific research questions (or hypotheses) can be developed. These questions will be addressed through data collection, analysis, and interpretation. There should be at least one (and ideally several) evaluation questions in support of each goal. When designing evaluation questions, consider the following guidance:
Generally, evaluation questions indicate, either explicitly or implicitly, a desired outcome or impact (e.g., reduced traffic crashes, improved travel time reliability, etc.). If the desired outcome or impact is not achieved, however, the evaluation should describe the actual results and address reasons (or potential reasons) that may account for the difference between the desired and the actual results. Table 3 provides a template for how to organize evaluation goals, objectives, and questions (a limited set of examples is included for descriptive purposes only).

Table 3. Template with Example Evaluation Goals, Objectives, and Evaluation Questions. Note: Examples are included for illustrative purposes only.
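The goal-objective-question hierarchy in Table 3 can also be kept as structured data, which makes it easy to verify the guidance above that every goal has at least one evaluation question. The following sketch is hypothetical; the goals, objectives, and questions are invented examples, not from any grantee's plan.

```python
# A hedged sketch of the Table 3 structure as data. All entries below are
# invented for illustration only.
from collections import defaultdict

evaluation_matrix = [
    {"goal": "Safety", "objective": "Reduce rear-end crashes on the corridor",
     "question": "Did rear-end crashes decline after deployment?"},
    {"goal": "Safety", "objective": "Reduce rear-end crashes on the corridor",
     "question": "Did hard-braking events decline after deployment?"},
    {"goal": "Mobility", "objective": "Improve travel time reliability",
     "question": "Did the planning time index improve after deployment?"},
]

def questions_per_goal(matrix):
    """Count evaluation questions per goal; each goal should have >= 1."""
    counts = defaultdict(int)
    for row in matrix:
        counts[row["goal"]] += 1
    return dict(counts)

print(questions_per_goal(evaluation_matrix))  # {'Safety': 2, 'Mobility': 1}
```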
Identify Performance Measures

As grantees develop their evaluation questions, it is important to begin identifying the performance measures or information that will address each evaluation question. The performance measures will be used to assess whether improvements and progress have been made on the safety, mobility, environmental, and other goal areas of the ATCMTD Program (as described in the FAST Act). In developing performance measures:
Chapter 3 provides additional guidance on performance measures, including recommended measures specific to fulfilling the requirements set forth in the FAST Act.

Develop Evaluation Design

While identifying the evaluation questions and performance measures, grantees should also be developing an appropriate evaluation design that describes how, within the constraints of time and cost, they will collect data that address the evaluation questions. This process entails identifying the experimental design, the sources of information or methods used for collecting the data, and the resulting data elements.

Experimental Design

The experimental design frames the logic for how the data will be collected. Evaluations of technology deployments often utilize a before-after design, whereby pre-deployment data (i.e., baseline data) are compared to data collected following the deployment of the technology. For certain evaluation questions, however, it may be appropriate to collect data only during the "after" period. For example, for measures related to user satisfaction with a technology, the design could include surveys only in the post-deployment period. More robust designs, such as randomized experimental and quasi-experimental designs, utilize a control group that does not receive the "treatment" of a program's activities to account for potential confounding factors (see Data Limitations or Constraints for more information on confounding factors). The same data collection procedures are used for both the treatment and control groups, but the expectation is that the hypothesized outcome (improved safety, mobility, etc.) occurs only within the treatment group and not the control group. Evaluation designs are applied to the different methods or information sources (see next section) that are utilized in the evaluation.

Data Collection Methodology

The evaluation team should consider the appropriate method(s) for addressing each of their evaluation questions.
For any given evaluation question, there may be multiple methods used to address it. For example, agency efficiency evaluation questions may be addressed through an analysis of agency operations data, as well as qualitative interviews with agency personnel. Conversely, the same method may be used to address multiple evaluation questions; vehicle field test data (e.g., CV data) may be used to inform both mobility and safety-related evaluation questions. When developing data collection methods, thought should be given to the specific data elements that will be gathered from each method, and whether those data elements meet the needs of the evaluation (e.g., address the evaluation questions, are available in the units required for the performance metric, etc.). Data elements will be either quantitative or qualitative, and can take many forms (e.g., speed data, crash data, survey responses, interview responses, etc.). Table 4 highlights examples of key methods, their data sources, and data collection considerations for each method.

Table 4. Examples of Data Collection Methods.
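The before-after-with-control design described under Experimental Design is often summarized as a difference-in-differences calculation: the change observed on the treated corridor, net of the change observed on a comparison corridor over the same period. The sketch below uses entirely hypothetical travel-time observations (in minutes); the corridors and numbers are invented for illustration.

```python
# A minimal difference-in-differences sketch for a before-after design with
# a control group. All travel-time values (minutes) are hypothetical.
from statistics import mean

treatment_before = [22.1, 23.4, 21.8, 22.9]  # treated corridor, baseline
treatment_after  = [19.6, 20.2, 19.9, 20.4]  # treated corridor, post-deployment
control_before   = [24.0, 23.5, 24.2, 23.8]  # comparison corridor, baseline
control_after    = [23.6, 23.9, 23.4, 24.1]  # comparison corridor, same periods

def diff_in_diff(t_before, t_after, c_before, c_after):
    """Change in the treatment group net of the change in the control group.
    A negative value means travel time fell more on the treated corridor."""
    treatment_change = mean(t_after) - mean(t_before)
    control_change = mean(c_after) - mean(c_before)
    return treatment_change - control_change

effect = diff_in_diff(treatment_before, treatment_after,
                      control_before, control_after)
print(f"Estimated effect: {effect:+.2f} minutes")
```

Subtracting the control-group change is what guards against attributing a region-wide trend (e.g., seasonal demand shifts) to the deployment; a real evaluation would pair this point estimate with an appropriate statistical test.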
Data Limitations or Constraints
For each evaluation question, it is important to consider any limitations or constraints that may affect your ability to collect the data or may affect the data collected. Examples of constraints include:
Identifying ways to mitigate these data limitations or constraints will enhance the ability to collect useful data. The evaluation team also should consider whether there are confounding factors that may impact the evaluation and should track such factors for the duration of the evaluation. A confounding factor is a variable that completely or partially accounts for the apparent association between an outcome and a treatment. Confounding factors are usually external to the evaluation; hence, they may be unanticipated or difficult to monitor. If grantees are using a before-after design without a control group (i.e., a non-experimental design), it is particularly important to consider potential confounding factors that may be the cause of a change in the before-after data. Grantees should avoid attributing a change in outcomes to the technology deployment when it is in fact due to some other factor. Potential mitigation approaches should also be identified for each confounding factor. As grantees think through the key components of their evaluation, including the evaluation questions, performance measures, data sources, data collection methodology, and data limitations, it is recommended that they document this information in the Evaluation Plan. The following template (see next page) is designed to provide grantees with a useful tool for summarizing this evaluation information.

Table 5. Example Methodology Template. Note: Examples are included for illustrative purposes only.
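Tracking confounding factors for the duration of the evaluation, as recommended above, can be as simple as keeping a dated log of external events and checking which ones overlap the evaluation period. The sketch below is hypothetical: the events, dates, and mitigations are invented for illustration.

```python
# A sketch of a confounding-factor log. Events, dates, and mitigations are
# hypothetical examples only.
from datetime import date

evaluation_period = (date(2023, 1, 1), date(2023, 12, 31))

confounding_log = [
    {"event": "Parallel arterial repaving (detour traffic)",
     "start": date(2023, 3, 1), "end": date(2023, 5, 15),
     "mitigation": "Exclude March-May data from the mobility analysis"},
    {"event": "Regional fuel price spike",
     "start": date(2022, 6, 1), "end": date(2022, 8, 31),
     "mitigation": "None needed; outside the evaluation period"},
]

def overlaps(period, event):
    """True if the event overlaps the evaluation period at all."""
    start, end = period
    return event["start"] <= end and event["end"] >= start

active = [e["event"] for e in confounding_log
          if overlaps(evaluation_period, e)]
print(active)  # only the repaving event overlaps the 2023 period
```

A log like this gives the evaluation team a concrete record to consult when interpreting before-after changes, rather than relying on memory at analysis time.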
For projects where data collection location, frequency, etc. may vary across the different technologies being deployed, it may be useful to document these data collection characteristics or procedures. See Table 6 below, which includes an example for illustrative purposes only. Table 6. Template for Data Collection Procedures.
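Data collection characteristics of the kind Table 6 templates (elements, location, frequency, collection period per technology) can also be recorded as a machine-readable document alongside the evaluation plan. The technologies, locations, and frequencies below are hypothetical placeholders invented for illustration.

```python
# A hedged sketch of documenting per-technology data collection procedures
# (in the spirit of Table 6). All entries are hypothetical examples.
import json

collection_procedures = {
    "adaptive_signal_control": {
        "data_elements": ["cycle length", "queue length", "travel time"],
        "location": "Main St corridor, 12 signalized intersections",
        "frequency": "continuous (aggregated to 15-minute bins)",
        "collection_period": "6 months pre- and 12 months post-deployment",
    },
    "connected_vehicle_rsus": {
        "data_elements": ["basic safety messages", "signal phase and timing"],
        "location": "roadside units at 8 intersections",
        "frequency": "event-driven (10 Hz message logging)",
        "collection_period": "12 months post-deployment",
    },
}

# Serializing to JSON yields a shareable artifact for the evaluation plan.
print(json.dumps(collection_procedures, indent=2))
```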
Develop Data Management Procedures

In most cases, grantees will be collecting significant amounts of data to support their evaluation and operations, and there are a number of data-related issues that need to be considered during evaluation planning. Management of data collected during the ATCMTD project may be documented in the Evaluation Plan, but grantees are strongly encouraged to develop, during the pre-implementation phase, a separate data management plan (DMP) that describes how the project team will handle data both during and after the project. This DMP can be updated with more information as the project proceeds. In planning for data management, grantees should consider how data will be captured, transferred, stored, and protected. The evaluation team will need to work closely with the project team to ensure that these protocols are put in place prior to the data collection period. Data management protocols include:
Grantees must provide USDOT the results of their evaluation via the Annual Reports required by the FAST Act (for the template, see Appendix B), and this should be reflected in their DMPs. Although not required, USDOT encourages grantees to make other relevant data available to the USDOT and the public to further advance the objectives of the ATCMTD program. For example, projects may provide the USDOT access to the underlying data used to determine the costs and benefits described in the report. The DMP should indicate whether project data contain confidential business information or personally identifiable information (PII), and whether such data will be shared in a controlled-access environment or removed prior to providing public or USDOT access. Additional voluntary guidance on creating DMPs can be found at the following link: https://ntl.bts.gov/public-access/creating-data-management-plans-extramural-research.

Design Analysis Plan

Grantees are encouraged to develop an analysis plan that describes how the evaluation data are going to be organized and analyzed. The analysis plan may be documented as a section of the Evaluation Plan, in the DMP, or in a separate document. The analyses must be structured to answer the questions of whether change occurred and whether that change can be attributed to the deployment. During evaluation planning, the evaluation team must determine the types of analyses that it plans to conduct (e.g., statistical procedures), so that the evaluation can be designed to produce the required data. For each of the evaluation questions, the evaluation plan should provide sufficient detail on how the data will be analyzed. Since evaluation data may come from multiple sources (e.g., experimental design (field tests), surveys, interviews, historical data, etc.), different types of analyses may be used in an evaluation.
Analysis methods may include descriptive statistics and statistical comparisons, as well as qualitative summaries and comparisons (e.g., based on interview data). Modeling or simulation may also be used as analytic methods.

Execute the Evaluation Plan

Executing the evaluation includes the collection of the data, the analysis of the data, and the development of findings.

Acquire or Collect Data

During data collection, the project team is capturing the data that have been identified in the evaluation plan. As detailed in previous sections, this may include system performance data, vehicle or infrastructure data, and survey responses, among other data elements.

Pilot Studies

Prior to the start of data collection, it is advisable to conduct a data collection pilot that tests the end-to-end data collection pipeline, particularly for new systems or tools (i.e., where there is no previously established data collection mechanism). For example, for automated or connected vehicle projects involving the collection of vehicle data, the pilot test should include logging data in its final format, offloading the data from the technology/vehicles/equipment, processing it, and transmitting it to where the evaluators will use it. Evaluators should be part of this feedback loop to make sure that the data are acceptable, including providing feedback on the format of sample data sets prior to the end-to-end test. In addition to a pilot study (which tests the data collection protocols), system acceptance testing should also be conducted, whereby the project team assesses whether or not the technology functions as designed. For projects involving surveys, a pilot involves testing the completed survey with a small set of respondents prior to the full launch. This will enable the project and evaluation teams to work through any issues regarding question relevance or interpretability, survey length, or other problems (e.g., data coding, processing, and storage) prior to the full survey launch.
This ensures that once the data collection begins, the evaluators are confident that the data will meet their evaluation needs. During the data collection pilot, complete data documentation should be generated to accompany the data. This is a general best practice, but it is particularly important if a third-party evaluator will be conducting the evaluation, staff turnover may occur on the project, or data will be made available to others down the road. At a minimum, data documentation should include:
Where possible, grantees should leverage insights from previous projects, including USDOT-funded intelligent transportation systems (ITS) research, to determine the right data formats and documentation to support evaluation. For example, data and documentation from past and current ITS research projects can be found through the USDOT's ITS DataHub at https://www.its.dot.gov/data/.

Analyze Data and Draw Conclusions

Data analysis techniques and methods will vary greatly, depending on the evaluation design and the types of data that are collected. For all deployments, however, the analyses must be structured to answer two questions:
During evaluation planning, the evaluation team must determine the types of analyses that it plans to conduct (e.g., statistical procedures), so that the evaluation can be designed to produce the required data.

Develop Annual Report(s)

The FAST Act requires that grantees submit Annual Reports. This Evaluation Methods and Techniques document provides guidance on how to structure an evaluation that will produce the data needed to meet this reporting requirement. According to the FAST Act (23 U.S.C. 503(c)(4)(F)), "For each eligible entity that receives a grant under this paragraph, not later than 1 year after the entity receives the grant, and each year thereafter, the entity shall submit a report to the Secretary that describes -
An Annual Report template has been designed to assist grantees in meeting their annual reporting requirement (see Appendix B). While evaluation-related activities are underway, grantees are asked to provide annual updates on their activities, organized by specific goal areas. In addition to a general summary of evaluation-related activities, these updates may include the status of baseline data collection (if applicable), data collection challenges, and evaluation milestones, among other information. Once data collection is completed, grantees are asked to report on their findings for each relevant goal area, and to note any particularly innovative or noteworthy findings. In order to collect information specified in the FAST Act, the template includes additional questions on how the project has met original expectations, a comparison of the benefits and costs of the project, lessons learned, and recommendations for deployment strategies.

Evaluation References

Administration for Children and Families, Office of Planning, Research and Evaluation. (2010). The Program Manager's Guide to Evaluation, Second Edition. Washington, D.C.

Barnard, Y. (2017). D5.4 Updated Version of the FESTA Handbook. Leeds, UK: FOT NET Data.

Dillman, D. A., Smith, J. D., & Christian, L. M. (2014). Internet, Phone, Mail and Mixed-Mode, Fourth Edition. Hoboken: John Wiley & Sons.

Gay, K., & Kniss, V. (2015). Safety Pilot Model Deployment: Lessons Learned and Recommendations for Future Connected Vehicle Activities. Washington, D.C.: Intelligent Transportation System Joint Program Office.

Groves, R. M., Fowler, Jr., F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology, Second Edition. Hoboken: John Wiley & Sons, Inc.

Marsden, P. V., & Wright, J. D. (2010). Handbook of Survey Research, Second Edition. Bingley: Emerald Group Publishing Limited.

Smith, S., & Razo, M. (2016).
Using Traffic Microsimulation to Assess Deployment Strategies for the Connected Vehicle Safety Pilot. Journal of Intelligent Transportation Systems, 66-74.

W. K. Kellogg Foundation. (2004). Logic Model Development Guide (Figure 2. How to Read a Logic Model). Battle Creek, MI. Obtained from: https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide