
Monday, July 18, 2016

Update

I am honored that over 40,000 viewers have visited my blog. As of today, my status as an ABD (all but dissertation) remains. I offer online tutoring in successfully writing academic research as well as very low-cost editing of research drafts, which includes proper APA formatting. Please feel free to email me queries and comments. God bless America!!

Saturday, April 18, 2015

Greetings

Achieving success in academic pursuits can be complicated when one's school is on a mission to hang on to a student to gain as much tuition money as possible rather than to support a student with a 3.9 GPA in earning approval of the dissertation.  I've achieved my lifetime goal in my academic pursuits despite that mission, which ran counter to integrity.  My blog is reaching 30,000 hits, and of this achievement I am proud. Many blessings to those who walk their own path.

Monday, March 4, 2013

Writing the Dissertation.....

Just wanted to update my blog.  I am a few months from finishing my dissertation on academic performance issues among American secondary students.  I also want to thank Dr. Robin Throne for recently adding qualifications to my Indeed profile.  I will add to my blog as time allows over the next few months.  I am close to reaching 10,000 hits for the material in my blog, and it makes me happy that my writing may be of help to others.  Thank you, and wish me luck on my PhD pursuit!

Sunday, December 16, 2012

Best Practices Relevant to Instructing Post-Secondary E-Learners

        A review of critical elements in research methodologies improves an instructor's research skills.  Improving acumen in research methods by examining academically grounded resources contributes to the adoption and application of best practices.  A review of research methodologies follows.  The research studies reviewed include (a) e-learning technologies in a design-based research (DBR) study by Bower (2011), (b) motivational factors affecting faculty integration of a learning management system by Gautreau (2011), (c) an analysis of users' satisfaction with e-learning by Chen and Lin (2008), (d) team learning in technology-mediated distributed teams by Andres and Shipps (2010), and (e) field experiences promoting cross-cultural awareness in preservice teachers by Malewski, Sharma, and Phillion (2012).
A Design-Based Research Study
            Appraising the design-based research (DBR) methodology included reviews of Bower (2011), Akilli (2008), and Amiel and Reeves (2008), which provided details for assessing validity and reliability in DBR studies.  An understanding of the similarities and differences among previously studied research designs supported a better analysis of Bower's (2011) study.  Akilli (2008), who explained the differences between quantitative, qualitative, mixed methods, and DBR methodologies, provided more comprehensive information than Bower (2011) or Amiel and Reeves (2008).  A review of Akilli (2008) and Amiel and Reeves (2008) precedes the analysis of Bower's (2011) study.
            Akilli (2008) wrote that the flexible processes used in DBR help designers and instructors to expect and integrate changes repeatedly during the DBR process.  Such a design is reminiscent of continuous quality improvements used in previous employment experiences.  Quality improvements also require carefully documented processes during retesting.
           The second article, by Amiel and Reeves (2008), argued that research methods investigating the learning functions of tools and techniques have weaknesses that DBR can resolve.  In fact, Amiel and Reeves (2008) argued that some researchers' work focusing upon in-use technologies to improve learning proffered feeble systematic advice to instructors.  Many educational researchers recommend DBR methods because the goal of DBR strengthens the bridge between "educational research and real-world problems" (Amiel & Reeves, 2008, p. 35).  The methodology that coincides with DBR is not prepared as a typical research design.  However, as noted above, DBR resolves weaknesses that other designs cannot, thereby contributing to improved learning.  Therefore, an analysis of Bower's (2011) study is included below.
The Research Problem, Questions, or Hypotheses
            Bower (2011) explained that using synchronous web-conferencing tools is less complex than employing asynchronous e-tools.  The characteristics of asynchronous environments include (a) instructors who need in-service training to keep technical skills up to date, (b) an increasing use of e-tools as new tools replace older tools, (c) learning institutions that need sufficient and progressive technical systems, (d) consideration of learners' technical and collaborative skills, and (e) integration of e-tools within an institution's technical systems and curricula, which demands vigilance (Bower, 2011).  By using web-conferencing, instructors introduce "less transmissive and more active distance learning pedagogies" (Bower, 2011, p. 64).  Barab and Squire (2004) remarked that rather than identifying hypotheses, DBR examines several design elements, which results in profiling a design's attributes.
The Research Purpose
            Bower (2011) posited that web-conferencing was more successful than asynchronous learning because a student's collaborative competency and technical skill are often sufficient for web-conferencing but not for asynchronous learning.  Improved success results in improved knowledge construction.  The cognitive load that students and instructors carry while learning technological and collaborative skills increases stress, which reduces their ability to learn and teach curricula (Bower, 2011).  Bower's (2011) research purpose included identification of web-conferencing competencies and their impact on learning.  Gaps exist in the literature regarding systematic empirical studies that tested students' and instructors' collaborative skills in synchronous or asynchronous settings (Bower, 2011).
The Type of Design and Elements of the Design
            DBR's iterative method evaluates an "innovative product or intervention …refines the innovation, and produces design principles" (Amiel & Reeves, 2008, p. 34) usable by future researchers.  Bower (2011) also explained that instructors use DBR to create models that support knowledge-building in a naturalistic learning environment.  Bower's (2011) design elements included a repetitive series of "theory-based analysis, design, and testing" (p. 67).  These elements sought to extract principles for generating useful learning platforms.  Bower (2011) chose DBR for its capacity to validate genuine learning environments and its practicality for informing a learning design's solution.
            Operational variables were two-fold.  The teacher, who was also the participant researcher, recorded and reviewed lessons from an introductory computer programming class.  The phenomenological framework included documentation of observations by the teacher (Bower, 2011).  Recording and documenting the learners' and instructor's acumen occurred during synchronous collaboration (Bower, 2011).  The data included the range of effects and relative prevalence.  DBR's variables are also characterized by "multiple dependent variables, including climate variables (such as collaboration among learners, and available resources), outcome variables (such as learning of content, and learning transfer), and system variables (such as dissemination, and sustainability)" (Barab & Squire, 2004, p. 4).
            The sample, studied during three semesters over two years, initially included 26 enrolled students (nine females and 17 males).  However, six students dropped the course, leaving 20 students by the study's end.  Other design elements included observation of three levels of synchronous collaboration: operational, interactional, and managerial (Bower, 2011).
Details Regarding Threats to Validity and How Resolved
            Because Bower (2011) did not refer to validity, a review of Sandoval and Bell (2004) provided the information necessary to assess the validity of Bower's (2011) study.  No scientific analysis supported validity within Bower's (2011) study.  However, design-based research parameters may not require such an analysis and may ultimately provide valid results.  Sandoval and Bell (2004) indicated that the research community remains significantly uncomfortable with educational research because scientific methods are not always employed.
            However, threats to validity and reliability can be decreased through methodological alignment as the observations characteristic of DBR occur.  For example, theoretical knowledge increases through observations over the course of a DBR study, which results in improved interventions (Sandoval & Bell, 2004).  Bower (2011) could have infused validity by modifying the design to test emerging assumptions as the study progressed.  Educators wishing to adopt Bower's (2011) observations cannot be assured that the reported learning outcomes would occur, and Bower's (2011) recommendations could be detrimental to students.
            Akilli (2008) noted that DBR researchers have used mixed methods to assess interventions' outcomes and revised interventions accordingly.  "Mixed methods warrants objectivity, validity, credibility, and applicability of the findings" (Akilli, 2008, p. 3) due to measurements between data.  As noted, Bower (2011) did not embrace mixed methods.  Rather, Bower's (2011) DBR focused upon "characterizing situations (as opposed to controlling variables) (and) developing a profile or theory that characterizes the design in practice (as opposed to simply testing hypotheses)" (Barab & Squire, 2004, p. 3).  Because Bower (2011) did not employ research methods that could improve validation of his findings, the meaning, significance, and ability of the study to add to learning theory were uncompelling.
            If Bower (2011) wanted to further theoretical knowledge for learning in academic settings, Barab and Squire (2004) indicated that advancing knowledge requires evidence-based claims.  Bower's (2011) research does not fulfill the requirements of evidence-based claims.  Until Bower (2011) responds to the requirements of DBR as noted by Barab and Squire (2004) and others, the robustness of Bower's (2011) study is unsupportable.  Furthermore, Bower (2011) needs to address the validity associated with approval from an institutional review board due to the involvement of human participants.
Findings from Research and Implications
            Bower (2011) reported that the web-conferencing learning environment was associated with improved learner satisfaction.  For example, learners received just-in-time web-conferencing training, which helped them remember the technical steps for maneuvering within the learning environment (Bower, 2011).  As learners' proficiencies increased, learners gained more control of the learning environment, and interactions climbed.  Bower (2011) observed that instructors' lower-level competencies, which involved operational and interaction skills, required less effort than higher-level management and design competencies.
            Less effective learning occurred when instructors misused and misunderstood the learning environment.  Bower (2011) recommended pre-class tutorials for overcoming instructors' and learners' skill deficits.  The web-conferencing software also limited learners' and the instructor's ability to view each other (Bower, 2011).
            When learning sessions are underway, Bower (2011) recommended that instructors run two computers, which facilitates improved session outcomes.  Instructors can scan everyone's activities immediately and make technical adjustments quickly while instruction is underway.  Bower (2011) also recommended that schools ensure learners receive the required technical training before classes begin, both because learning improves significantly and empowers students and because learners expect learning technologies to be at least as capable as those they currently use.
Demographics and Motivations for Faculty Adoption of a Learning Management System
            Understanding the critical elements of academically grounded research contributes to the adoption and application of best practices.  In anticipation of teaching post-secondary online students in business or education curricula, staying up-to-date on evolving learning technologies, strategies, and applications requires expertise in evaluating peer-reviewed literature.  Consequently, a review follows that discusses research methodologies pertinent to the aforementioned specialization.
The Research Problem, Questions, or Hypotheses
            A study by Gautreau (2011) investigated the motivational factors affecting an instructor's implementation of an online learning management system (LMS).  Gautreau's (2011) literature review revealed that many universities fell short in training faculty members to use learning technologies.  Gautreau (2011) was adamant that institutions need to provide a strong system of effective motivational strategies for instructors' LMS training.  Gautreau's (2011) two research questions asked to what degree demographics related to instructors' motivation to implement an LMS and how motivational elements ranked as instructors adopted LMSs.
The Research Purpose
            Gautreau (2011) indicated that his study's purpose was to demarcate motivational issues related to the concessions instructors accept prior to adopting LMSs.  In addition, Gautreau's (2011) study intended to review face-to-face and online instructors' demographics to clarify and relate any motivational elements involved in initiating the use of an LMS.  Adding a study that consistently identified the effects of motivational factors alongside demographics would address weaknesses in the existing literature.
The Type of Design and Elements of the Design
            Gautreau (2011) used the motivation hygiene theory (MHT) to shape the test instrument's design, which evaluated two research questions using a needs assessment.  The MHT's underpinnings enabled the identification of motivators.  The first research question involved studying demographic elements such as "gender, age, tenure status, department, and years of experience using an LMS" (Gautreau, 2011, p. 2).  The second research question explored the intrinsic and extrinsic factors experienced by instructors.  The sample included 42 tenured and tenure-track instructors from the College of Communications at a public California university (Gautreau, 2011).  The operational construct was a self-evaluation survey instrument using a five-point Likert scale ranging from "strongly agree…to strongly disagree" (Gautreau, 2011, p. 10), and a chi-square test of independence assessed the demographic information.
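            To make the chi-square step concrete, the short sketch below runs a test of independence on a small, invented contingency table (tenure status versus willingness to adopt an LMS).  The counts and variable names are hypothetical illustrations for this review, not Gautreau's (2011) data.

# Illustrative sketch only: a chi-square test of independence like the one
# Gautreau (2011) describes. The contingency table below is invented for
# demonstration and is not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: tenured, untenured; columns: willing, unwilling (hypothetical counts)
observed = np.array([
    [15, 6],   # tenured
    [18, 3],   # untenured
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")

# A result with p < .05 would be read as a significant association between
# tenure status and willingness to adopt the LMS, mirroring the kind of
# finding Gautreau reports for the tenure and experience variables.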
Details Regarding Threats to Validity and How Resolved
            While Gautreau (2011) did not mention any threats to validity, the study's sample of 42 instructors was small and sourced from only one university department, both of which could affect confidence in the findings.  Gautreau (2011) did not resolve these two threats.  Instead, Gautreau (2011) noted that a future study could include instructor participants from more colleges and that the accuracy of data collection could be improved with the use of a survey.  Gautreau's (2011) conclusion that "tenure status, level of experience with an LMS, and level of experience with computers" (p. 11) impacted an instructor's willingness to use an LMS appears correct because all three variables were statistically significant (p < .05).
             Although other researchers investigated several more factors that affect motivation to use an LMS, Gautreau (2011) limited the number of factors investigated because the other studies addressed different factors, causing "uniformity" (p. 10) issues.  Gautreau (2011) addressed uniformity issues by using random selection.  Rather than narrowing the study, Gautreau (2011) could have included research elements presented by other researchers.  By including the other elements, Gautreau (2011) might have provided stronger evidence to support his findings.
Findings from Research and Implications
            Gautreau's (2011) study found that participants' age and gender did not impact the choice of selecting an LMS.  However, as noted above, an instructor's "tenure status, level of experience with an LMS, and level of experience with computers" (Gautreau, 2011, p. 11) did impact the instructor's willingness to use an LMS.  Analysis of each of the three elements resulted in a significant relationship (p < .05) to using an LMS (Gautreau, 2011).
            Gautreau's (2011) first research question asked if demographics affected motivation toward the use of an LMS.  Strangely, Gautreau (2011) noted that untenured instructors chose to use e-tools to improve student learning but did not mention results concerning tenured instructors.  Gautreau's (2011) output table, which was not well formatted, disclosed a sample of 28 men and 13 women, of whom 51% were tenured and 49% untenured.  Fifty-seven per cent of the sample had more than five years of experience with an LMS, 31% had two to five years, and 12% had fewer than two years.
            While Gautreau (2011) wrote that his study's data were consistent with others' research, which showed that tenure status determined the use of technology resources, he did not display any supporting data.  For example, Gautreau's (2011) output table showed 51% tenured and 49% untenured.  However, the table did not cross-tabulate tenure status with years of LMS experience.  The output table's data should have differentiated years of LMS experience between tenured and untenured instructors.
            Gautreau (2011) also confounded his findings because years of experience with an LMS is not the same as a determinant that technology resources will be used in the future.  Due to the apparent problems with Gautreau's (2011) analysis, his claim of consistency was ungrounded, and his data should be more transparent.  Therefore, Gautreau (2011) did not support his first research question.
            Gautreau's (2011) second research question addressed the ranked order of motivators associated with adopting an LMS.  Gautreau's (2011) study determined that the ranked order of motivators from greatest to least was "salary, responsibility, achievement, advancement, company policy and administration, the work itself, and recognition" (p. 12).  Gautreau (2011) made one suggestion for increasing the degree of motivation for salary, responsibility, and achievement but stopped after these motivators.
            If Gautreau (2011) had offered suggestions for increasing each of the motivators, the study would have had greater impact.  Gautreau (2011) even noted that all motivators should be considered relevant when he postulated that the rankings would "fluctuate depending upon several variables" (p. 15).  Furthermore, Gautreau (2011) recommended that instructors' technological proficiencies be addressed through a developmental program.  However, linking proficiency to years of experience was not addressed, and the association was neither supported by data nor identified as a motivator.
A Research Study on Users' Satisfaction with E-Learning 
            Understanding the critical elements of academically grounded research contributes to the adoption and application of best practices.  In anticipation of teaching post-secondary online students in business or education curricula, staying up-to-date on evolving learning technologies, strategies, and applications requires expertise in evaluating peer-reviewed literature.  Consequently, a review of research methodologies pertinent to the aforementioned specialization follows. 
The Research Problem, Questions, or Hypotheses
            Chen and Lin (2008) presented an analysis of users' satisfaction with e-learning using a negative critical incidents approach.  To help ensure that e-learning programs achieve the targeted learning outcomes, Chen and Lin (2008) investigated mechanisms that support learner satisfaction.  The association between the frequency of negative critical incidents (FNCI), attribute-specific cumulative satisfaction (ASCS), and overall cumulative satisfaction (OCS) in the model used by Chen and Lin (2008) was grounded in expectancy disconfirmation theory.
            The association noted above was reflected by the researchers' three hypotheses, which were (1) "attribute-specific cumulative satisfaction for e-learning is directly, and negatively affected by the frequency of negative critical incidents in e-learning" (Chen & Lin, 2008, p. 117), (2) "overall cumulative satisfaction for e-learning is indirectly and negatively affected by the frequency of negative critical incidents through attribute-specific cumulative satisfaction with e-learning" (p. 117), and (3) "overall cumulative satisfaction for e-learning is indirectly and negatively affected by the frequency of negative critical incidents through attribute-specific cumulative satisfaction with e-learning" (p. 117).  Chen and Lin (2008) anticipated that investigating the three hypotheses would clarify learners' satisfaction levels.  
The Research Purpose
            The study by Chen and Lin (2008) sought to validate an e-learning satisfaction assessment model.  Consequently, the research team chose the Satisfaction Assessment from Frequency of e-Learning (SAFE) model.  Specifically, SAFE measured "negative critical incidents for e-learning" (Chen & Lin, 2008, p. 117).
The Type of Design and Elements of the Design
            The University of Taiwan was the study's location.  The results of a pilot study involving 51 online students (67% male and 33% female) aided the research team in improving an "anonymous questionnaire survey" (Chen & Lin, 2008, p. 118) prior to the main study.  The questionnaire evaluated students' opinions about a variety of functions within the school's e-learning system.  Two hundred sixty-three online master's students were required to take an examination on campus.  The research team distributed the revised questionnaire at the examination and collected 240 completed questionnaires.
            The questionnaire's seven-item Likert scale included categories such as "administration, functionality, instruction, and interaction" (Chen & Lin, 2008, p. 118).  The assessment model, SAFE, was measured using normed Chi-square, and "all fit-indices indicated that the model was a good fit for e-learning" (Chen & Lin, 2008, p. 122).  Constructs were administration and functionality, and there were eight inter-constructs.
            Validation of the model tested hypothesized associations (Chen & Lin, 2008), and LISREL 8.3 software analyzed the data.  An overall mean cumulative learner satisfaction was 5.68, which indicated that learners' satisfaction with e-learning approaches ranged between satisfied and very satisfied (Chen & Lin, 2008).  The ASCS analysis showed learners' satisfaction ranged between "no comment and satisfied" (Chen & Lin, 2008, p. 122).  The FNCI analysis showed a range between "sometimes and often" (Chen & Lin, 2008, p. 122).  
            Standardized regression coefficients evaluated causal hypotheses, and showed significance at the 0.01 level (Chen & Lin, 2008).  Administration, functionality, instruction, and interaction significantly impacted learner satisfaction (Chen & Lin, 2008).  Interaction had the most impact, which meant that improving interactions would result in the largest gain in overall learner satisfaction (Chen & Lin, 2008).  
Details Regarding Threats to Validity and How Resolved
            An inspection checked the instrument's composite reliability and construct validity (Chen & Lin, 2008).  Because the composite reliabilities were above the suggested threshold of 0.6, and the confirmatory factor analysis loadings of the study's constructs were also above 0.6, convergent validity corroborated the measurement model (Chen & Lin, 2008).
            In addition, construct validity based on the "average variance extracted (AVE)" (Chen & Lin, 2008, p. 121) was affirmed at the recommended estimate of 0.5.  Administration and functionality ranged from just below 0.5 to 0.9 and underwent further review (Chen & Lin, 2008).  Correlations between construct pairs were below the recommended 0.9 cutoff (Chen & Lin, 2008).  Therefore, "distinctness in construct content or discriminate validity" (Chen & Lin, 2008, p. 121) was achieved.  Chen and Lin (2008) produced a research project that would compel peers to accept their findings due to the reliability of the methodology used.
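            For readers unfamiliar with the two indices referenced above, the short sketch below computes composite reliability and AVE from a construct's standardized indicator loadings using their conventional formulas; the loadings are hypothetical values chosen for illustration, not Chen and Lin's (2008) reported statistics.

# Reference sketch (not reproduced from Chen & Lin, 2008): conventional
# formulas for composite reliability (CR) and average variance extracted (AVE).
# "lambdas" holds one construct's standardized indicator loadings; the values
# below are hypothetical.
def composite_reliability(lambdas):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    total = sum(lambdas)
    error_var = sum(1 - l ** 2 for l in lambdas)
    return total ** 2 / (total ** 2 + error_var)

def average_variance_extracted(lambdas):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in lambdas) / len(lambdas)

loadings = [0.78, 0.71, 0.83, 0.69]  # hypothetical standardized loadings
print(f"CR  = {composite_reliability(loadings):.2f}")    # compared against the 0.6 threshold
print(f"AVE = {average_variance_extracted(loadings):.2f}")  # compared against the 0.5 threshold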
Findings from Research and Implications
            As noted above, the study results by Chen and Lin (2008) disclosed that "inter-construct correlations were below 0.9" (p. 121), which implied distinctness in both construct content and discriminate validity.  In this study, H1 was sustained because e-learning satisfaction was directly and negatively affected by the frequency of negative critical e-learning incidents (Chen & Lin, 2008).  H2 and H3 were also supported because administration, functionality, instruction, and interaction had only an indirect impact on overall e-learner satisfaction (Chen & Lin, 2008).
            Other findings included that the success of e-learning programs depends upon sufficient reflection by administration, a functioning e-learning system, and a course of action that supports the "instructional process and interaction among participants and the instructor" (Chen & Lin, 2008, p. 124).  The study revealed other significant elements impacting e-learner satisfaction, including e-learner collaborations, the discussion board technology and interactions, and limited office hours.  Chen and Lin (2008) recommended that instructors improve e-learner interactivity by creating curricula that enhance communications.  Employing assessment models like SAFE, which can assess both positive and negative effects on learner satisfaction rather than only collecting positive effects, affords researchers more unbiased information (Chen & Lin, 2008).  By using the SAFE model, which achieved 71% explanatory power compared with 49% in models used by other researchers, a study's results would more effectively support recommendations that improve e-learners' satisfaction levels (Chen & Lin, 2008).
A Research Study on Learning in Technology-Mediated Distributed Teams
            Understanding the critical elements of academically grounded research contributes to the adoption and application of best practices.  In anticipation of teaching post-secondary online students in business or education curricula, staying up-to-date on evolving learning technologies, strategies, and applications requires expertise in evaluating peer-reviewed literature.  Consequently, a review of research methodologies pertinent to the aforementioned specialization follows. 
The Research Problem, Questions, or Hypotheses
            Andres and Shipps (2010) conducted a study to discover whether learning could be improved by using e-tools that facilitated project-based learning for distributed teams.  Research questions included "How does technology-mediated collaboration impact team learning behaviors?" (Andres & Shipps, 2010, p. 213) and "Does team learning involve both a technical and social process?" (p. 213).  Andres and Shipps (2010) decided that a direct observation approach could improve the capture of actual real-time behaviors and alleviate issues caused by participants' inability to recall information.  Hypotheses determined by Andres and Shipps (2010) included:
            (a) Groups working in a face-to-face collaboration setting should exhibit more effective team learning behaviors than in a technology-mediated setting, (b) positive association exists between team learning behaviors and team productivity, and (c) positive associations exist between team learning behaviors and the quality of team interactions.  (p. 215)
The Research Purpose
            The purpose of the study by Andres and Shipps (2010) was to contribute to current research about the consequences of project-based team collaboration in a technology-mediated learning environment.  To identify those consequences, the researchers assessed the role that a team's learning behaviors played in task outcomes.  The study also sought to observe the associations between technology, learning, and social processes in "technology-mediated distributed teams" (Andres & Shipps, 2010, p. 213).
The Type of Design and Elements of the Design
        The research design by Andres and Shipps (2010) used direct observation and employed an "empirical interpretive research approach" (p. 215).  The approach helped Andres and Shipps (2010) to "interpret, evaluate and rate observable manifested behaviors and qualitative content associated with project-based team learning" (p. 215).  The study applied two learning theories: the theory of affordances and social impact theory.
            Andres and Shipps (2010) posited that the two theories would be useful in creating a model in which "collocated vs. non-collocated and videoconferencing supported" learning might define the values associated with a learning environment.  Such values were believed to include the evolution of critical thinking and collective methods that impact team-building (Andres & Shipps, 2010).  Trained observation staff ranked "task-related and affect-related" (Andres & Shipps, 2010, p. 213) dialogues between learners.  The research team performed a pretest of the research model and hypotheses but provided no results and made no reference to any needed modifications.
            Andres and Shipps (2010) used a sample of 48 undergraduate students from a management information systems (MIS) program.  Studies had shown that students at similar programming levels exhibited abilities similar to professional programmers if the "problem domain (was) well understood" (Andres & Shipps, 2010, p. 216).  The skill sets of such MIS learners include work with moderately complex programming.  Each participant received extra credit, and the team achieving the highest level of productivity in each experiment was promised a small monetary award.  The participants' assignment, which was to improve the performance of a hypothetical college's MIS department, required partitioning into teams.  The teams' instructions included creating documentation for a software design within two and one-half hours.  Each team received half of the assignment's instructions and was to share them collaboratively with the other teams (Andres & Shipps, 2010).
            Andres and Shipps (2010) employed the behavioral observation approach to assess teams.  Observer training ensued, with an emphasis on fully comprehending the construct definitions and the applicable behavioral indicators related to team learning (Andres & Shipps, 2010).  Collection of the overall team ratings occurred midway into the appointed assignment time and just before ending the sessions (Andres & Shipps, 2010).  The literature reviewed by Andres and Shipps (2010) informed a five-item rating scale and the use of a seven-point Likert scale.  The agreement index between raters was very high.
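            Andres and Shipps (2010) do not name the agreement index they report, so the sketch below illustrates one common choice, Cohen's kappa, computed for two hypothetical raters; the ratings and the choice of statistic are assumptions made here for demonstration only.

# Illustrative sketch of one common inter-rater agreement index (Cohen's kappa).
# The rating lists below are invented; they are not the study's observer data.
from collections import Counter

rater_a = [5, 6, 6, 4, 7, 5, 6, 5]
rater_b = [5, 6, 5, 4, 7, 5, 6, 5]

n = len(rater_a)
observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal category frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected_agreement = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2

kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate very high agreement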
Details Regarding Threats to Validity and How Resolved
            Andres and Shipps (2010) examined "internal consistency reliability, convergent validity, and discriminant validity of the construct measurements (and calculated) the construct’s composite reliabilities (CR) and the average variance extracted (AVE)" (p. 217).  Both constructs' reliability scores were above the 0.70 benchmark.  The t-statistic loadings demonstrated internal reliability and item convergent validity (Andres & Shipps, 2010).  The AVE square roots confirmed discriminant validity via the correlation between the team learning and "interaction quality latent variables" (Andres & Shipps, 2010, p. 217).  Consequently, the "measurement model displayed discriminant validity" (Andres & Shipps, 2010, p. 217).  Andres and Shipps (2010) produced a research project that compels peers to accept their findings due to the validity anchored in their procedures and the reliability of the methodology used.
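            The discriminant-validity logic described above, comparing the square roots of AVE against inter-construct correlations, is often called the Fornell-Larcker criterion.  The sketch below shows that check with hypothetical AVE values and a hypothetical correlation rather than the statistics Andres and Shipps (2010) report.

# Illustrative check of the discriminant-validity logic described above
# (the Fornell-Larcker criterion): the square root of each construct's AVE
# should exceed its correlation with the other construct. All numbers are
# hypothetical.
import math

ave_team_learning = 0.62
ave_interaction_quality = 0.58
correlation = 0.49  # correlation between the two latent variables

ok = (math.sqrt(ave_team_learning) > correlation and
      math.sqrt(ave_interaction_quality) > correlation)
print("Discriminant validity supported" if ok else "Discriminant validity in doubt")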
            Andres and Shipps (2010) made no mention of obtaining approval from an Institutional Review Board (IRB) for their study.  If approval was not sought and granted, the study by Andres and Shipps (2010) would be non-compliant.  Non-compliance also "compromises the integrity and validity of the research" (U.S. Department of Health and Human Services, 2006, p. 5).  More important than the loss of the study's potential contribution would be the potential harm to the participants caused by confidentiality issues and the like.  In addition, Andres and Shipps (2010) noted that a payment and course credit were to be provided to participants.  If IRB approval was not gained prior to the study, such incentives could be construed as undue inducement (U.S. Department of Health and Human Services, 2006).  Whether an IRB application was omitted because the researchers believed their study was exempt cannot be assumed, since there was no mention of an IRB.
Findings from Research and Implications
            The results of the study by Andres and Shipps (2010) revealed that the "collaboration mode can impact team information exchange…interpretation, and task outcomes" (p. 213).  In addition, the greater the success of a team's collaboration, the more successful the resulting social structures, which achieve significant task outcomes (Andres & Shipps, 2010).  A partial least squares (PLS) analysis supported the hypotheses.  As collaboration increased team learning, productivity and the quality of the teams' interactions also increased.  Working on tasks face-to-face rather than through online collaboration resulted in fewer problems with "communication breakdowns, misunderstandings, and difficulty moving forward with task execution" (Andres & Shipps, 2010, p. 219).
           Andres and Shipps (2010) reported a consensus that technology-mediated collaboration included contexts of both technological and societal elements.  Technological elements enable an execution of procedural and technical tasks.  Societal elements allow for psychological task accomplishments.  Therefore, both the technological and societal elements provide the most impact for teams to participate, cooperate, and reflect (Andres & Shipps, 2010).
A Research Study on Promotion of Cross-Cultural Awareness in Preservice Instructors
            Understanding the critical elements of academically grounded research contributes to the adoption and application of best practices.  In anticipation of teaching post-secondary online students in business or education curricula, staying up-to-date on evolving learning technologies, strategies, and applications requires expertise in evaluating peer-reviewed literature.  Consequently, a review of research methodologies pertinent to the aforementioned specialization follows. 
The Research Problem, Questions, or Hypotheses
            Malewski, Sharma, and Phillion (2012) presented a study using preservice instructors as a means to examine endorsement of cross-cultural awareness.  Research questions included
(1) "How do international field experiences prepare preservice teachers to teach in diverse settings?, (2) What are the pedagogical implications of increased cultural awareness among preservice teachers for classroom practice?, and (3) How do international field experiences open preservice teachers to future opportunities to explore and work in culturally diverse communities?" (Malewski, Sharma, & Phillion, 2012, p. 1).
The Research Purpose
            The research purpose noted by Malewski, Sharma, and Phillion (2012) was to address important cross-cultural concerns relative to preparing preservice teachers to practice in "culturally and linguistically diverse classrooms" (p. 1).  Malewski et al. (2012) expected that the study would include an international trip.  Furthermore, Malewski et al. (2012) wanted to document each participant's cross-cultural connection with and awareness of cultural information.
The Type of Design and Elements of the Design
            Malewski, Sharma, and Phillion (2012) conducted their qualitative collective case study using 49 preservice instructors from an American university who were participating in a Honduran field experience, which placed them in rural and urban school systems.  Sample demographics included 37 females and 12 males who were 97% white and 3% biracial.  Only 3.5% spoke Spanish fluently or with limited skill.  Data was gathered from "questionnaires, interviews, focus interviews, course assignments, discussions, journal reflections, and researchers’ observations and field notes" (Malewski, Sharma, & Phillion, 2012, p. 2).  Multiple data sources required triangulation in order to ensure accuracy and reliability before preparing and disclosing the study's findings.  Participants were given instructions to use "reflective journaling, autobiographical writing, teacher portraits, and critical analysis of pedagogical issues" (Malewski et al., 2012, p. 5).  
            Malewski, Sharma, and Phillion (2012) focused their study's design upon a previous researcher's work that explored the efficacy of a collective case research structure.  The structure included (a) an extraordinary singular experience that required a collective investigation of a "group of persons, places, events, issues, or problems" (Malewski, Sharma, & Phillion, 2012, p. 11), (b) an investigation taking place at the location of the experience, (c) a collection of extensive content-rich information from various data sources, (d) the replication of results across cases, and (e) an emphasis on the differences within and between cases at any possible opportunity.  Interviews with the participants before and after the trip assessed their knowledge of culture, principles, and personal outlooks, which provided a picture of each participant's experiential learning. 
            Several themes materialized demonstrating growing awareness about the impact of language on attempts to grasp other cultures.  For example, themes arose about the effect of economic conditions on a culture's ability to provide education to its people and about how differing degrees of understanding about cultures affect pedagogical comprehension.  Other themes recognized the value of exposing preservice teachers to other cultures and the importance that other cultures play in understanding one's own cultural viewpoints (Malewski, Sharma, & Phillion, 2012).
Details Regarding Threats to Validity and How Resolved
            Malewski, Sharma, and Phillion (2012) completed a study that reviewed data from a single location and indicated that their study added a significant resource for educators despite the one area examined.  Rather than providing scientific support for the advancement of academic knowledge, Malewski et al.'s (2012) study claimed a contribution based upon other researchers' work.  Malewski et al. (2012) appeared to claim significance because theirs was a first-of-its-kind study.  Uniqueness does not establish validity.  The study by Malewski et al. (2012) may help peers, but the study was uncompelling due to the issues noted above.
            Malewski, Sharma, and Phillion (2012) explained that their study revealed the themes noted above, which signified a collective case.  However, the study did not address atypical cases.  Researchers who discount cases that do not fit a research parameter, whether objectively or subjectively, may discredit or eliminate data that would otherwise reveal important information, such as findings from atypical cases.
            In addition, researchers must be very wary of personal opinions affecting their data analysis.  Research validity is compromised if researchers' inductive processes incorrectly eliminate opportunities to discover good or bad findings.  Once peers review research and detect that possibly useful data was not included in a study, the study's researchers may find that few pay attention to their future work.  Adhering to scientific methods eliminates such problems.
Findings from Research and Implications
            Conclusions by Malewski, Sharma, and Phillion (2012) indicated that "experiential learning in an international setting was key to developing preservice teachers’ cross-cultural awareness" (p. 2).  Expanding cultural awareness provides preservice teachers with an otherwise unobtainable understanding and an ability to successfully instruct culturally diverse learners (Malewski, Sharma, & Phillion, 2012).  Sensitivity to others' cultures gives preservice instructors opportunities to form skills in negotiating, interpreting, and engaging with people, skills that enable them to instruct learners more effectively.
            Malewski, Sharma, and Phillion (2012) indicated that preservice students challenged the "omissions, biases, and inclusions that form an ability to conceive of cultural manifestations" (p. 35), which include "assumptions, values, beliefs, and attitudes" (p. 35).  Cross-cultural awareness expands when learners actively interact with content and cultural information.  In addition, expertise with culturally diverse learners increases through exposure to theories and applications (Malewski, Sharma, & Phillion, 2012).  Apparently, other researchers have not addressed the advantages of engaging with other cultures as a means to increase preservice teachers' cross-cultural awareness.  Consequently, the study by Malewski et al. (2012) advances knowledge that improves educational programs for teaching students.  Supporting international field experiences provides preservice teachers with experience and cross-cultural skills for teaching diverse students (Malewski et al., 2012).  Such skills strongly support better learning outcomes as preservice teachers begin instructing widely diverse students.
A Compelling Case Related to the Significance of Findings
            This section redesigns the DBR methodology presented by Bower (2011) above.  Educators must be extremely cognizant of the methods used in academic research because instructional practices based on faulty, incomplete, or less than ideal research methods can be extremely detrimental to learning outcomes.  For example, Akilli (2008) wrote that "many DBR studies lack a sound theoretical foundation, and do not add to the literature to refine and develop the theory" (p. 3).  Some DBR researchers target a solution, and pursue learning issues that pertain to the solution, which often causes "under-conceptualized research" (Akilli, 2008, p. 3).  Barab and Squire (2004) remarked that rather than identifying hypotheses, DBR looks at "multiple aspects of the design, and develops a profile that characterizes the design in practice" (p. 4).  However, Akilli (2008) noted that DBR researchers have used mixed methods to assess interventions' outcomes, and revised interventions accordingly. 
            Mixed methods warrants "objectivity, validity, credibility, and applicability of the findings" (Akilli, 2008, p. 3) due to measurements between data.  Bower (2011) focused upon "characterizing situations (as opposed to controlling variables) (and) developing a profile or theory that characterizes the design in practice (as opposed to simply testing hypotheses)" (Barab & Squire, 2004, p. 3).  Rather than considering a DBR design and implementing instructional tools based only upon Bower's (2011) theory, this redesign merges DBR with a mixed methods approach.  The mixed method will positively impact the integrity of the new design because extending its scope will overcome the "minimal ontology" (Barab & Squire, 2004, p. 5) that DBR investigations typically represent.  Reduced doubt concerning reliability issues and increased confidence provide the researcher with more assurance in offering the upgraded study to peers.
The Research Problem, Questions, or Hypotheses
            Although distance education and e-learning environments began decades ago and technologies for online learning are vast, paradigms for best practices in andragogical learning continue to shift without the benefit of sound research (Skylar, 2009).  Researchers have posited that improved learning outcomes might be achieved through synchronous and asynchronous web conferencing (Bower, 2011).  However, scientific evidence prevails over theory (Skylar, 2009).  Barab and Squire (2004) wrote that convincing others that recommendations based upon DBR are trustworthy, valid, and reliable is achievable by applying a qualitative research method.  Scientific research augments current theories regarding the benefit of web conferencing for e-learners.  By converging DBR with a qualitative research method, assertions are supportable and limitations dramatically reduced.  The research questions include (1) Does synchronous or asynchronous web conferencing affect e-learners' performance? and (2) Does synchronous or asynchronous web conferencing affect e-learners' satisfaction?  The study's four hypotheses are:
            H1: Employing web conferencing in a synchronous e-learning environment improves learning performance.
            H2: Employing web conferencing in an asynchronous e-learning environment improves learning performance.
            H3: Employing web conferencing in a synchronous e-learning environment increases learners' satisfaction.
            H4: Employing web conferencing in an asynchronous e-learning environment increases learners' satisfaction.
The Research Purpose
            The purpose of the mixed methods research project is to determine whether learners' performance and satisfaction improve during synchronous and asynchronous web conferencing.  The literature revealed that many DBR researchers have disregarded a mixed methods research design.  This project provides an opportunity for DBR researchers to examine the research methods discussed below, which, unlike the DBR method alone, support "objectivity, validity, credibility, and applicability of the findings" (Akilli, 2008, p. 3) due to data convergence.
The Type of Design and Elements of the Design
            Intra-method mixing using concurrent analysis of the qualitative data from questionnaires and interviews ensures data convergence (Akilli, 2008).  Ninety online students from an undergraduate marketing class at three state-administered universities in Syracuse, New York, represent the study's sample.  The design consists of four six-week classes running concurrently, designated as Class One and Class Two.  Fifteen students from each university participate, alternating once between the two classes.  The classes use a synchronous web-conferencing learning platform and an asynchronous web-conferencing learning platform.  Examining data from the two classes should demonstrate performance and satisfaction related to each learning scenario if the pretest and posttest data are compared separately for participant learners and researchers.
            An IRB application is to be submitted at the researcher's university, one of the three state-run universities noted above, before any contact is made with the participants.  Two research peers from the other two universities are contributing members of the research team.  All participants will be advised about the study's parameters after the IRB application is approved.
            All students register and log into the portal for both online courses at the appropriate time.  The management information services department, which maintains the server for the three colleges, and the research team pretest the two courses.  The asynchronous class uses a WebCT course management system to collect data from a pretest and posttest, scored with Scantron technologies, covering (a) learners' and researcher participants' proficiencies and (b) an end-of-class student satisfaction survey that uses open and closed questions.  The synchronous class uses the Elluminate Live course management system.  The research team also conducts an Elluminate Live pretest to assess students' proficiencies, and each student completes a posttest and satisfaction survey.
            The research team is to assess and code the survey's open-ended questions.  Thirty per cent of the tests will be randomly chosen and manually scored to ensure the reliability of the scores.  Scores from the two learning environments' pretests will be compared with posttest scores to assess changes in proficiency among the participant learners and researchers.  The scores from the satisfaction survey will compare the learners' levels of approval.  IBM's SPSS software was chosen to run all data for the concurrent analysis.
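            As a rough illustration of the comparisons just described, the sketch below pairs pretest and posttest proficiency scores and contrasts satisfaction ratings between the two environments using common significance tests.  The score arrays are hypothetical placeholders, the tests are assumed analogs of the planned SPSS analysis, and the actual study would use the Scantron-scored data and the coded survey responses.

# Hedged sketch of the pretest/posttest and satisfaction comparisons described
# above, implemented in Python rather than SPSS for illustration. All scores
# below are hypothetical placeholders.
import numpy as np
from scipy.stats import ttest_rel, ttest_ind

# Pretest and posttest proficiency scores for the synchronous class (paired).
sync_pre = np.array([62, 70, 55, 68, 74, 60, 66, 72])
sync_post = np.array([75, 82, 63, 80, 85, 71, 77, 84])
t_paired, p_paired = ttest_rel(sync_post, sync_pre)
print(f"Synchronous pre/post: t = {t_paired:.2f}, p = {p_paired:.4f}")

# Satisfaction ratings compared between the two environments (independent groups).
sync_satisfaction = np.array([6, 7, 5, 6, 7, 6, 5, 7])
async_satisfaction = np.array([5, 6, 5, 4, 6, 5, 6, 5])
t_ind, p_ind = ttest_ind(sync_satisfaction, async_satisfaction)
print(f"Satisfaction difference: t = {t_ind:.2f}, p = {p_ind:.4f}")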
            The study's procedures will provide a repetitive series of "theory-based analysis, design, and testing" (Bower, 2011, p. 67).  The "naturalistic contexts" (Luo, 2011, p. 5) extracted from the study triangulate the DBR information with the formative evaluative qualitative data, potentially resulting in a new learning model.  Capturing the operational variables from the data collected in both classes and from the participant researchers provides an effect range between the students' and researchers' competencies.  Multiple dependent variables include "climate variables (e.g., collaboration among learners, available resources), outcome variables (e.g., learning of content, transfer), and system variables (e.g., dissemination, sustainability)" (Barab & Squire, 2004, p. 4).
            Amiel and Reeves (2008) wrote that design-based research in combination with qualitative research produces "evidence-based outcomes" (p. 37) because the convergence results in engaged research.  Answering the research questions will be possible because analysis of the data described above provides a basis for accepting or rejecting each hypothesis.
            Luo (2011) wrote that "relevant and quality research on educational technology must do more than simply present empirical findings on how well a technology application worked, but should also be able to interpret why it worked" (p. 3).  Achieving a contribution to learning is possible due to (a) a much more robust research methodology than DBR alone because a mixed methods research design will be employed, (b) assessments designed to provide the research team with the reasons each learning platform was or was not beneficial to learning and satisfaction (process-to-outcome analysis), (c) a research methodology that supports the validity of the findings, and (d) data interpretations that will focus upon explaining why the synchronous and asynchronous LMSs improved or did not improve proficiencies.  In addition, data from both classes will support recommendations to improve learning, instructor training, and satisfaction.
Conclusion
            The essay above presented two sections.  The first section examined empirical research methodologies as well as the conclusions presented by scholarly researchers.  The second section presented a design converging a qualitative research method with the DBR discussed by Bower (2011).  Analyzing the research methods in the first section and designing a research project in the second section provided an opportunity to apply new research information.  The analyses improved the research skills required for future projects and developed a stronger ability to evaluate best practices before implementing them in instructional strategies.

References:

Akilli, G. K. (2008, February). Design-based research vs. mixed methods: The differences and commonalities. Retrieved from http://it.coe.uga.edu/itforum/paper110/Akilli_DBR_vs_MM_ITForum.pdf

Amiel, T., & Reeves, T. C. (2008). Design-based research and educational technology: Rethinking technology and the research agenda. Educational Technology & Society, 11(4), 29-40. Retrieved from http://ifets.info/journals/11_4/3.pdf

Andres, H. P., & Shipps, B. P. (2010, Summer). Team learning in technology-mediated distributed teams. Journal of Information Systems Education, 21(2), 213-221. Retrieved from http://jise.org/Volume21/21-2/Pdf/vol21-2-pg213.pdf

Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. The Journal of the Learning Sciences, 13(1), 1-14. Retrieved from http://learnlab.org/research/wiki/images/a/ab/2004_Barab_Squire.pdf

Bower, M. (2011, May). Synchronous collaboration competencies in web-conferencing environments - their impact on the learning process. Distance Education, 32(1), 63-83. doi:10.1080/01587919.2011.565502

Gautreau, C. (2011, January). Motivational factors affecting the integration of a learning management system by faculty. Journal of Educators Online, 8(1), 1-25. Retrieved from 

Luo, H. (2011). Qualitative research on educational technology: Philosophies, methods and challenges. International Journal of Education, 3(2), e13. doi:10.5296/ije.v3i2.857

Malewski, E., Sharma, S., & Phillion, J. (2012). How international field experiences promote cross-cultural awareness in preservice teachers through experiential learning: Findings from a six-year collective case study. Teachers College Record, 114(8), 1-44. Retrieved from http://www.tcrecord.org.proxy1.ncu.edu/library

Sandoval, W., & Bell, P. (2004). Design-based research methods for studying learning in context: Introduction. Educational Psychologist, 39(4), 199-201. Retrieved from http://www.lopezlearning.net/files/14963084Sandoval-Bell_Article-1.pdf

Skylar, A. A. (2009, Fall). A comparison of asynchronous online text-based lectures and synchronous interactive web conferencing lectures. Issues in Teacher Education, 18(2), 69-84. Retrieved from http://www1.chapman.edu/ITE/public_html/ITEFall09/09skylar.pdf
 
 
Tanner, J. R., Noser, T. C., & Totaro, M. W. (2009, Spring). Business faculty and undergraduate students' perceptions of online learning: A comparative study. Journal of Information Systems Education, 20(1), 29-40. Retrieved from http://search.proquest.com.proxy1.ncu.edu/docview/200167163?accountid=28180
 
U.S. Department of Health and Human Services. (2006, March). Guidance for clinical trial sponsors: Establishment and operation of clinical trial data monitoring committees. Retrieved from http://www.fda.gov/downloads/RegulatoryInformation/Guidances/ucm127073.pdf