
Saturday, March 10, 2012

Best Practices for Designing Research Purpose Statements & Questions

Best Practices for Devising a Research Purpose Statement and Questions

The purpose of this paper is to demonstrate how Research Purpose Statements and Research Questions can be improved using suggestions provided by Creswell (2008), Aveyard (2007), and others. For novice researchers, synthesizing the many sections of a dissertation requires the ability to identify a research problem. Learning "how to properly construct and develop logical argumentation for a problem statement" (Ellis & Levy, 2008, p. 19) provides the doctoral student with additional skills over time.

Background of Nardone's Dissertation

Nardone (2009) submitted a dissertation entitled "Reputation in America's graduate schools of education: A study of the perceptions and influences of graduate school of education deans and school superintendents regarding U.S. News & World Report's Ranking of 'Top Education Programs'." The purpose of the study was to "explore the perceptions and influences of the respondents to the U.S. News and World Report's (USNWR) annual reputational survey for Graduate Schools of Education (GSOEs). The respondents represented two unique stakeholder groups for Graduate Schools of Education: GSOE deans and school superintendents" (Nardone, 2009, p. 6).

Research Purpose Statement Revised

Appropriately critiquing a doctoral-level paper requires an understanding of what defines a Research Purpose Statement (RPS). Creswell (2008) provides such a definition, writing that the RPS "advances the overall direction or focus for the study," whether the study is quantitative, qualitative, or both, and consists of one or two well-formed sentences. The RPS often lies within the Statement of the Problem section and is frequently placed at the end of the Introduction.

Nardone's (2009) dissertation stated that the purpose was to "explore the perceptions and influences of the respondents to the U.S. News and World Report's (USNWR) annual reputational survey for Graduate Schools of Education (GSOEs)" (p. 6). Although Nardone's (2009) RPS was stated in two sentences, a more significant problem exists with the opening of the RPS because she had previously indicated that her paper would address both qualitative and quantitative research. To accommodate both types, Creswell (2008) pointed out that a quantitative study's RPS states that "the purpose of this study is to examine the relationship…" (p. 121), while a qualitative RPS states that "the purpose of the study is to explore…" (p. 121).

As noted above, Nardone (2009) used only "explore" to signify her study's intention (the qualitative wording) when she should have used words that reference both qualitative and quantitative approaches. Not adjusting her RPS to accommodate both types is confusing to readers. An improved RPS would state that the purpose of the study was to examine the relationship between, and explore, "the perceptions and influences of the respondents to the U.S. News and World Report's (USNWR) annual reputational survey for Graduate Schools of Education (GSOEs)" (Nardone, 2009, p. 6).

Research Questions Revised

The research questions presented in Nardone's (2009) dissertation included "three major research questions, and related sub-questions. One objective of the research is to identify, and quantify, the actual role that the reputational survey plays (based on respondents' scores) in the USNWR GSOE rankings" (p. 8). As Nardone (2009) notes, "prior research of the undergraduate rankings indicates that the reputational aspect significantly drives the overall ranking of the institutions" (p. 8).

Q1. "What is the significance of the reputational survey in U.S. News & World Report’s annual ranking of Graduate Schools of Education (GSOEs)?" (Nardone, 2009, p. 8). Exploring the behaviors and perceptions of the survey respondents—the GSOE deans and school superintendents—is another objective of the research. Nardone (2009) explains that:

Research explores the respondents' perceptions about the GSOE rankings themselves, in terms of what purpose the rankings might serve, and their perceptions about the reputational survey component of these rankings. More specifically, the study aims to understand their level of awareness of the reputational survey, their understanding of their impact on the rankings, their level of responsiveness to the survey, and their methods and approach to responding to the survey. Why do they, or do they not, respond to the survey? Do they personally respond to the survey? Do they consult with other colleagues? An important emphasis of the research will be on exploring the differences in perceptions and behaviors between these two stakeholder groups. These objectives are captured in the second research question. (p. 8).

Q2. "How do GSOE deans and school superintendents differ in their perceptions about, responsiveness to, approach to, and behavior regarding, the reputational survey in U.S. News & World Report’s annual ranking of GSOEs?" (Nardone, 2009, p. 9). Nardone (2009) states:

Finally, this research will explore the concept of reputation with these survey respondents. The literature indicates that reputation is generally conceptualized as either prominence or as perceived quality. This research asks the two stakeholder groups what forms the basis of their rating of institutions when responding to the USNWR survey. Do they consider the quality of the program graduates? Do they consider the quality and production (output) of faculty research? Do they consider the glossy promotional materials that cross their desk? Do they consider the level of sponsored research? Do they consider student selectivity? Do they consider the published rankings themselves? This will explore whether this important ranking category captures reputation as either prominence, or perceived quality. Again, an important emphasis is the examination of the differences between the two stakeholder groups. Thus, the third research question. (p. 9).

Q3. "How do these two unique stakeholder groups differ, when rating the GSOEs, in their conceptual definition of reputation—reputation as prominence, or reputation as perceived quality?" (Nardone, 2009, p. 9).

Nardone (2009) adds that the study "does not join the active debate over the best indicators or measures of quality, but instead accepts 'reputation' as an asset of value for the university and explores the perceptions and behaviors of two stakeholder groups involved in the rating of academic reputation" (p. 9).

After reviewing Nardone's (2009) research questions several times, and comparing them against Nardone's (2009) Research Purpose Statement, which was to "explore the perceptions and influences of the respondents to the U.S. News and World Report's (USNWR) annual reputational survey for Graduate Schools of Education (GSOEs)" (p. 6), my general impression is that the RPS is not stated broadly enough to encompass all of the research questions. Since one goal of the RPS is precision, and yet this RPS is not broad enough, my opinion is that Nardone (2009) is attempting to integrate too many research questions.

To reiterate, the three research questions are: Q1. "What is the significance of the reputational survey in U.S. News & World Report's annual ranking of Graduate Schools of Education (GSOEs)?" (Nardone, 2009, p. 8); Q2. "How do GSOE deans and school superintendents differ in their perceptions about, responsiveness to, approach to, and behavior regarding, the reputational survey in U.S. News & World Report's annual ranking of GSOEs?" (Nardone, 2009, p. 9); and Q3. "How do these two unique stakeholder groups differ, when rating the GSOEs, in their conceptual definition of reputation—reputation as prominence, or reputation as perceived quality?" (Nardone, 2009, p. 9). In addition, my impression is that there is some duplication between Q2 and Q3, because both questions ask how GSOE deans and school superintendents differ.

The list of topics Nardone (2009) has chosen to cover in the research questions is overwhelming. The Research Purpose Statement, revised as previously discussed, is to examine the relationship between, and explore, "the perceptions and influences of the respondents to the U.S. News and World Report's (USNWR) annual reputational survey for Graduate Schools of Education (GSOEs)" (Nardone, 2009, p. 6). The associated, unrevised research questions are the three quoted above (Q1 through Q3).

After revising the above, the Problem Statement and Research Questions are:

Problem Statement: Explore the perceptions and influences of the respondents to the U.S. News and World Report's (USNWR) annual reputational survey for Graduate Schools of Education (GSOEs).

Questions:
1. What is the significance of the reputational survey?
2. How do GSOE deans and school superintendents differ in their perceptions about, responsiveness to, approach to, and behavior regarding, the reputational survey?
3. How do these two unique stakeholder groups differ, when rating the GSOEs, in their conceptual definition of reputation?

To revise the Research Questions so that consistencies and differences with the Problem Statement are identified (via meta-ethnography and meta-synthesis) (Aveyard, 2007, p. 108), the final suggested revision is: Q1. How do GSOE deans and school superintendents differ in their perceptions about, responsiveness to, approach to, and behavior regarding, the reputational survey?

Conclusion

The purpose of this paper was two-fold: revising the research purpose statement and revising the research questions presented in Nardone's (2009) dissertation entitled "Reputation in America's graduate schools of education: A study of the perceptions and influences of graduate school of education deans and school superintendents regarding U.S. News & World Report's Ranking of 'Top Education Programs'." Coincidentally, the subject of Nardone's (2009) dissertation mirrors one of the dissertation subjects chosen for my doctoral program, which is to research why the ranking of America's institutes of learning continues to fall when compared with global learning institutes. Working on assignments for EDU7002 serves two purposes: submitting the required work for EDU7002 (and other future courses), and assessing literature, which provides excellent opportunities to develop skills in preparing dissertation-level papers in the future.

References

Aveyard, H. (2007). Doing a literature review in health and social care: A practical guide. Maidenhead, UK: Open University Press. Retrieved from Northcentral University E-brary.

Creswell, J.W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (3rd ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Ellis, T. J., & Levy, Y. (2008). Framework of problem-based research: A guide for novice researchers on the development of a research-worthy problem. Informing Science: The International Journal of an Emerging Transdiscipline, 11, 17-33. Retrieved from http://inform.nu/Articles/Vol11/ISJv11p017-033Ellis486.pdf

Nardone, M. S. (2009). Reputation in America's graduate schools of education: A study of the perceptions and influences of graduate school of education deans and school superintendents regarding U.S. News & World Report's ranking of "Top Education Programs" (Doctoral dissertation). Retrieved from ProQuest via the Northcentral University Library.

Assessment Criteria for Learning Events, Assisting Learning, and Rich Online Attributes

This article presents examples of, and the assessment criteria for, a training event; the factors that assist learning; the attributes of a previous successful training event; and an evaluation of the characteristics of a rich online learning event. Many of the following comments are drawn from past experiences of training provided by employers. However, the more valuable learning experiences came from traditional classrooms and from the e-learning completed while pursuing two online degrees, and that online learning has now surpassed the time spent in face-to-face instruction. E-tools can maintain engagement, but they should not be used to the extent that they waste learners' time. Whether online or offline, the measure of a rich learning event depends upon how well learning is accomplished, not how well content is transmitted.

Assessment Criteria of a Training Event

The criteria used to assess whether an educational or training event was good or bad have changed over three and one-half decades of work experience as my experiences and expertise grew. Personal experiences, professional experiences, and professional expertise all shaped those assessment criteria, because these three factors deepened my accumulated wisdom, my personal and professional needs, and the accomplishments I expected based upon self-efficacy. For example, early in my professional career, a sense of naivety led me to accept all information as true and of the highest possible quality. Over time, however, my assessment criteria changed, and I demanded more benefit for the time and expense devoted to a training event. Consequently, I came to understand that gaining knowledge is primarily the learner's responsibility, whether in a training event or not.

Palloff and Pratt (2001) indicated that staff who are well trained, know their material, and provide up-to-the-minute information on the topics presented encourage learning. Instructors who show that they have been successful in their profession, are recognized by their peers, and are credentialed and possibly published provide a sense of reliability to learners. Being given an appropriate tool for self-evaluating one's learning, and being able to see that the knowledge is transferable, are critical for learning. Whether training is online or not, when an employer or future employer accepts that completed training is of value, the instructional event provided value.

Factors that Assist Learning

Several factors help me to learn. For example, an instructor who does not seem rushed makes it easier for a learner to focus. Motivators for learning include training that helps people change job types, retrain, or obtain or retain jobs (Maeroff, 2003). Learners are additionally motivated when employers demonstrate that the new learning is of value, that the training applies to the job, and that it contributes to career development. Palloff and Pratt (2001) promoted using e-learning tools for synchronous and asynchronous discussion boards, e-mail, group work, beneficial links, the ability to upload or download graphics and audiovisual elements, and tracking that gives the learner quick access to earlier work. Using technology increases students' learning of the course content, their ability to achieve learning outcomes, and the fulfillment of their interactive needs. Learning is assisted when flexibility exists in course authoring, assessment tools are chosen based on the desired learning outcomes rather than the available technology, there is sound planning by the institution's stakeholders, and accessibility exists for all learners despite cultural, linguistic, physical, or financial differences (Palloff and Pratt, 2001).

Attributes of a Previously Successful Training Event

A previous rich learning experience was successful because the benefits gained from the trainer and the learning materials immediately improved a work skill. For example, when I was placed in orientation side-by-side with a supervisor and immediately practiced the information on a computer, my learning accelerated greatly. Other attributes of successful learning events included feeling respected by the instructor and other participants when contributing during training, speakers' ability to answer questions immediately or shortly thereafter, gaining heightened confidence after speaking in public, having quality technical equipment and learning materials, and learning in a comfortable environment that promoted interaction with colleagues. Palloff and Pratt (2001) recognized that e-learning materials improve learning when they are applicable to all "learning styles" (p. 50).

Maeroff (2003) indicated that "institutions have tailored courses to the specifications of individual business' needs" (p. 125). Businesses and colleges that are helping minds to work need to collaborate in establishing strategies for learner outcomes. Businesses, after all, are more "aggressive innovators" (Maeroff, 2003, p. 124). Learning programs and course designs must be developed based upon the needs and goals of employers, which creates customized learning experiences and improves employability.

Broadbent (2002) presented six levels of learning reminiscent of the types of training encountered in the past: "knowledge, comprehension, application, analysis, synthesis, and evaluation" (p. 113). These levels characterize successful training because knowledge is advanced through the six levels during training, beginning with new knowledge, and the result is learning that can be applied elsewhere, whether soon after the training or much later. Instructors should accommodate different types of learners so that all can progress through the six learning levels (Broadbent, 2002).

Rich Online Learning Characteristics

Based upon more than a decade of online instruction, the learning experiences necessary for rich online learning include several characteristics. For example, when instructors consistently assess work using the same American Psychological Association (APA) and other academically designated writing standards, learners receive non-conflicting guidance. Learners gain valuable online learning experiences when they perceive value for the invested time and money, provided that tuition is not increased after a program has begun while the learning value remains as it was assessed at the outset. Rich online learning also depends on technical aspects that are user-friendly and server-stable, make signing in easy, and provide for posting and retrieving work. In addition, universities with online programs should provide an online directory of major staff and contact information instead of just one person per department. The availability of e-learning class materials and online resources such as a library, learning tools, forms, and forums enhances online learning. Accreditation must be in order, and information about credentialing agencies provided. Requiring similar technical software and hardware from class to class promotes user-friendliness, and progressively making improved e-tools available for learning is essential. Responses from instructors and university staff should arrive within twenty-four hours.

Learning outcomes must be challenging for all students, whether a GPA is high or low, and not one-size-fits-all. Classmates' writing needs to be at a level where all can understand the context. Assignments should be aligned with students' situations, so that when a student has never practiced as a teacher, the assignment is not based upon a practicing teacher's experiences. End-of-class assessments need to be responded to when requested. Mentors should provide feedback every time work is submitted because of the expertise they offer. A rich online experience fits a program's requirements to the needs of students, such as enabling students to take the courses needed to meet their job goals, or at least provides enough flexibility to do so.

Palloff and Pratt (2001) indicated that the development of policies and procedures enables a curriculum to build increasing skill in critical thinking and analysis as well as peer collaboration. Strategies addressing the "changing cultural, organizational, economic, and survival issues of the institution" (Palloff and Pratt, 2001, p. 38) should be identified cooperatively by administration and faculty, and then implemented. An "investment in the technical infrastructure needed to support" (Palloff and Pratt, 2001, p. 42) the implemented strategies is clearly essential. Pedagogical methods should supersede technological attributes, and learning content should be linked to the e-tools that best support it (e.g., interactive simulations versus static traditional books). Visible support for the development of e-learning courses and programs should come from the faculty senate and administrators, and a university's departments should use the same quality measurements (Palloff and Pratt, 2001). An effort to retain online students must be exhibited by the university's support systems via a "learner-centered focus" (Palloff and Pratt, 2001, p. 47).

The work completed by students must be appraised by instructors and plagiarism tools to ensure legitimacy to the greatest extent possible (Maeroff, 2003). Broadbent (2002) suggested that online instructors should assimilate effective learning models such as Gagne's, or Salmon's, which identifies five stages of successful e-learning. When e-learning courses are thoughtfully planned, designed, and delivered based upon the needs and culture of the provider and the learners, a rich online learning event is possible.

Conclusion

This paper presented examples of, and the assessment criteria for, a training event; the factors that assist learning; the attributes of a previously successful training event; and an evaluation of the characteristics of a rich online learning event. Keeping in mind the details of the above sections, the most important factor of e-learning is that "interactive learning keeps students energized, and helps participants absorb information and remember it" (Broadbent, 2002, p. 120). Successful online instructors who enable learners to interact can be assured that the moment of interaction is the moment when learning occurs.


References

Broadbent, B. (2002). ABCs of e-learning: Reaping the benefits and avoiding the pitfalls. San Francisco: Jossey-Bass/Pfeiffer.

Maeroff, G.I. (2003). A classroom of one: How online learning is changing our schools and colleges. New York, NY: Palgrave Macmillan.

Palloff, R.M., & Pratt, K. (2001). Lessons from the cyberspace classroom: The realities of online teaching. San Francisco: Jossey-Bass/Pfeiffer.

Friday, March 9, 2012

Higher Order Concerns in Academic Writing: Chronology, Order of Importance and Generality

Three Examples of Higher Order Concerns in Writing Academically

1. Chronology

Having been a victim of bullying at a rural grade school during the 1970s, I remain intrigued by the subject of bullying as it increasingly manifests in today's society, even to the point of causing suicide. With the ever-increasing use of social networks on the internet, and the growing likelihood that a grade-school student has a cell phone and personal computer, cyber-bullying is quickly becoming a common phenomenon. Consequently, empowering grade school children and parents with tools to avoid cyber-bullying has become a common topic in the 21st century.

A recent report by the American Academy of Child & Adolescent Psychiatry (2008) revealed that almost fifty percent of school children experience bullying, and that bullying affects their academic performance and their social and emotional development. Empowering grade school children and parents with tools to avoid cyber-bullying requires research into the technical security aspects of e-communications provided by cell phones, computers, and other e-tools. While schools can prevent the use of e-tools like cell phones and computers during school hours, students and parents carry the responsibility for after-school e-tool use.

2. Order of Importance

Empowering grade school children and parents with tools to avoid cyber-bullying has become an increasingly relevant topic of the 21st century. Having been a victim of bullying at a rural grade school during the 1970s, I remain intrigued by the subject of bullying as it increasingly manifests in today's schools, even to the point of causing suicide. A recent report by the American Academy of Child & Adolescent Psychiatry (2008) revealed that almost fifty percent of school children experience bullying, and that bullying affects their academic performance and their social and emotional development. Empowering grade school children and parents with tools to avoid cyber-bullying requires research into the security features of e-communications provided by cell phones and other e-tools. While schools can prevent the use of e-tools like cell phones and computers during school hours, students and parents carry the responsibility for after-school e-tool use.

3. Order of Generality

Having been a victim of verbal bullying at a rural grade school during the 1970s, I remain intrigued by the subject of bullying as it increasingly manifests in today's schools, even to the point of causing suicide. A recent report by the American Academy of Child & Adolescent Psychiatry (2008) revealed that almost fifty percent of grade school children experience bullying. Bullying affects the academic performance and the social and emotional development of school children (American Academy of Child & Adolescent Psychiatry, 2008).

Unfortunately, the widespread use of cell phones and social networking sites by grade school students to communicate with each other has created additional avenues for transmitting abusive behavior. While schools can prevent the use of e-tools like cell phones and computers during school hours, students and parents carry the responsibility for after-school e-tool use. Empowering grade school children and parents with tools to avoid cyber-bullying requires research into the technical aspects of e-communications provided by cell phones and other e-tools. By providing students and parents with training on how to set up and implement the security features on cell phones and computers, abusive behaviors become avoidable.


References

American Academy of Child & Adolescent Psychiatry. (2008, May). Bullying: Facts for families (No. 80). Retrieved January 17, 2010, from http://www.aacap.org/galleries/FactsForFamilies/80_bullying.pdf

The Continuing Decline in the U.S.'s Educational Ranking

Although the International Association for the Evaluation of Educational Achievement (IEA), an international consortium of research institutions in more than forty countries, has worked for over fifty years on improving student achievement globally, the continuing decline of the U.S.'s educational ranking in recent years is problematic both culturally and competitively. Declining achievement in math and science speaks to the failure of America's educational reformers to enact sufficient teaching practices within curricula. The research contained herein addresses recent thinking and efforts by governments, researchers, and academic leaders to improve students' educational achievement in America and globally.

Examination of academic processes in countries that are improving in academic ranking, which serve as pockets of academic excellence, may provide solutions for improving student achievement not only in America but also in other countries that have reported declining learning achievement. Identifying the factors that have enabled other nations to excel in academic performance gives America's leaders an opportunity to stem the devastating economic harm that has been building for many years. No easy answer exists. Without a global cooperative effort, and utmost dedication toward implementing solutions, serious doubt remains that progress in educational performance is possible in the United States and elsewhere.

Investigating and identifying pockets of academic excellence globally can provide solutions for America's educational and government leaders, who need to address America's falling academic performance to remain globally competitive. The following literature reviews provide insight into possible reasons why other countries' academic performance has exceeded America's. Identifying the factors that have enabled other nations to excel in academic performance can help reverse America's continuing trend toward decreasing academic performance.

Literature Review

Mohrman (2008) investigated China, which is "dealing with a new set of values (primarily from the West) emphasizing economic efficiency, privatization, individual autonomy, and globalization" (p. 30). Mohrman (2008) remarked that China's new set of values demonstrates a response to changes in higher education and society related to China's adoption of the Emerging Global Model (EGM). Since "China is unique in educational history in simultaneously pushing for rapid enrollment growth, instituting new governance structures, and seeking to build world-class universities" (Mohrman, 2008, p. 30), promising information may provide examples and answers for other struggling nations.

Mohrman (2008) notes that China is leading the world in student enrollments, research intensity, positive changes in the academic profession, internationalization, recruitment of foreign scholars, adoption of foreign curricular models, and so forth. Mohrman's (2008) findings indicated that "Chinese academics…are quick to say that their universities have a long way to go before they can honestly claim world-class status" (p. 46). However, in less than 30 years, China progressed from having no schools to rapid advancement in global competition in higher education (Mohrman, 2008). Because the article's analysis and findings are based upon 100 interviews with staff from Chinese universities and government agencies, Mohrman (2008) appears to have reported logically and accurately on the aforementioned findings.

Lang and Zha (2004) provided an analysis of the "theories and methods with respect to a specific form of university comparison, peer selection, in Western higher education literature, then focuses on a case study of the peer selection practice of the University of Toronto in Canada, and attempts to depict the implications for Chinese universities" (p. 340) as a means to answer the question, "what is a world-class university?" (p. 340). "Peer comparisons can provide a basis for the rational evaluation of differences and of similarities among institutions, and of identifying relative strengths, weaknesses, and possible opportunities or niches" (Lang and Zha, 2004, p. 341). Establishing standardized criteria for measuring educational quality among universities is critical to distinguishing quality institutions of learning from sub-standard ones.

Lang and Zha (2004) reported that "there was a peer selection problem that made the benchmarks problematic" (p. 341), which could understandably result in an irrational evaluation of the differences and similarities among institutions. Lang and Zha (2004) found that "university comparison or peer selection exercises should start from program comparison…(however) very few league tables and rankings function at the program level" (p. 352). Achieving reliable and valid data for comparing universities' programs and peer selection would be enhanced if China adopted the CUDEC and AAUDE databases as used by Toronto (Lang and Zha, 2004). "Only with sufficient data, the comparative analysis concerning Chinese universities' strength and identity in the world university community can be valid" (Lang and Zha, 2004, p. 353). Lang and Zha (2004) provided logical results in the article, and their conclusions offered sound advice not only for China but for all nations. Assessing the quality of universities worldwide in order to identify how universities may improve relies upon accuracy when comparing data.

Adler and Harzing (2009) appropriately framed their research topic by quoting Einstein, who said, "not everything that can be counted counts, and not everything that counts can be counted" (p. 72). Adler and Harzing's (2009) research focused upon "the problematic nature of (the) academic ranking systems and question if such assessments are drawing scholarship away from its fundamental purpose" (p. 72). In addition, based upon their findings, the authors call for a "temporary moratorium on rankings (which) may be appropriate until more valid and reliable ways to assess scholarly contributions can be developed" (p. 72).

Indeed, Adler and Harzing (2009) asked whether universities remember "that their primary role is to support scholarship that addresses the complex questions that matter most to society?" (p. 72). A dangerous trend in ranking individuals and universities relies upon reviewing a particular subset of journals, which are also written only in English (Adler and Harzing, 2009). "Academia needs to stop measuring success by where scientists publish and [to] use different criteria, such as whether the work has turned out to be original, illuminating and correct" (Adler and Harzing, 2009, p. 78). "What is our scholarship actually contributing?" ask Adler and Harzing (2009, p. 92). Whether using metrics for counting publications or citations, the question remains: "Has the scholar asked an important question and investigated it in such a way that it has the potential to advance societal understanding and well-being?" (p. 92). The authors pose several important questions regarding the global society but are hard-pressed to provide conclusive solutions.

Aguillo, Ortega, and Fernandez (2008) reported on how the proliferation of the worldwide web has resulted in the development of web indicators, which are used to construct university rankings. In particular, Aguillo et al. (2008) present the "Webometric Ranking of World Universities which is built using a combined indicator called WR that takes into account the number of published web pages…the number of rich files, those in pdf, ps, doc and ppt format" (p. 233), and so forth. The Webometric Ranking shows that "there is a larger than expected academic digital divide between higher education institutions in the United States and those in the European Union" (Aguillo et al., 2008, p. 233) because, surprisingly, "many scholars' web presence is not related to their academy duties and they are ignoring requests to contribute to the common effort" (p. 233), thereby causing isolationism.

Aguillo, Ortega, and Fernandez (2008) concluded that "web indicators should be used to measure universities' performance in conjunction with more traditional academic indicators" (p. 233). New web indicators are solving issues related to the "instability of search engine results and the artefacts produced by the Web Impact Factor" (Aguillo et al., 2008, p. 234). Aguillo et al. (2008) reported that "Compared to other rankings results, the number and positions of the US universities are far bigger and better than their European counterparts, even considering British institutions" (p. 243), a claim that has not been empirically tested. "There are prestigious universities underperforming in the webometrics arena due to erroneous decisions, incomplete mandates or insufficient motivations regarding their web policy" (Aguillo et al., 2008, p. 243). Aguillo et al. (2008) presented several charts revealing worldwide universities' web presence; therefore, based upon that empirical data, their conclusions appear to be well formulated, logical, and grounded in sound study.
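
Because the combined WR indicator is described above only in general terms, a minimal sketch may help illustrate how such a composite web-presence score could be calculated. The equal weights, field names, and sample counts below are illustrative assumptions for this blog only; they are not the actual Webometrics formula reported by Aguillo, Ortega, and Fernandez (2008).

    # Illustrative sketch: combine per-university web-presence counts
    # (pages and rich files such as pdf/ps/doc/ppt) into one composite score.
    # Weights and sample counts are hypothetical, not the real WR formula.

    def normalize(values):
        """Scale raw counts to the 0-1 range so indicators are comparable."""
        top = max(values) or 1
        return [v / top for v in values]

    def composite_scores(universities, weights=(0.5, 0.5)):
        """Return a hypothetical WR-style score: weighted sum of normalized counts."""
        pages = normalize([u["web_pages"] for u in universities])
        rich = normalize([u["rich_files"] for u in universities])
        return {u["name"]: weights[0] * p + weights[1] * r
                for u, p, r in zip(universities, pages, rich)}

    sample = [  # hypothetical counts for demonstration
        {"name": "University A", "web_pages": 120000, "rich_files": 8000},
        {"name": "University B", "web_pages": 95000, "rich_files": 12000},
    ]
    for name, score in sorted(composite_scores(sample).items(),
                              key=lambda kv: kv[1], reverse=True):
        print(name, round(score, 3))

Ranking universities by such a weighted composite makes clear why the choice of indicators and weights, as the authors note, strongly shapes the resulting order.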

Williams (2008) reported on "University rankings (that) are having a profound effect on both higher education systems and individual universities, discuss(es) the desirable characteristics of a good ranking methodology and document(s) existing practice(s), with an emphasis on the two main international rankings (Shanghai Jiao Tong and THES-QS)" (p. 51). Williams (2008) wrote that "A university should be ranked highly if it is very good at what it does" (p. 52) but that whole-of-institution rankings must "recognise institutional differences (that) should either be conducted separately for different types of institutions or be obtained by aggregation of rankings at a sub-institutional level" (p. 52). Williams (2008) also noted that while "national research funding agencies may rate research groups, this requires too much detailed information for international comparisons" (p. 53).

Williams (2008) discussed quantitative measures of research performance as well as measures of learning and teaching, which adds further relevance to his research. Reflecting upon international university evaluators' ranking methodologies, and conducting a survey of CEOs from leading international research universities, add credibility to Williams' (2008) article. Williams (2008) supplied a recent report on global university rankings (Chart B), which contrasts with other research reporting that America ranks 30th in the world. Such information is invaluable in investigating not only how to improve America's educational performance level, but also whether the ranking system used globally is valid.

Williams (2008) remarked that "there is a need…for an ongoing ratings research group, at arms length from the universities and government, perhaps as a component of some form of tertiary education council" (p. 57). When universities and governments introduce financial incentives or wish to gauge performance, "whether in monitoring and fostering research, good teaching, evaluation of disciplines, and so on" (Williams, 2008, p. 57), the tertiary education council could provide the required methodologies (Williams, 2008). Williams (2008) provides helpful detail on how universities may determine ranking based upon performance measures, and thus identify a World-Class University (WCU):

(i) "a high concentration of talent (faculty and students), (ii) abundant
resources to offer a rich learning environment and conduct advanced
research, and (iii) favourable governance features that encourage
strategic vision, innovation and flexibility, and enable institutions to
make decisions and manage resources without being encumbered by
bureaucracy" (p. 58).

Rustique-Forrester (2005) reported on recent studies, which found conflicts regarding whether "test-based rewards and sanctions create incentives that improve student performance, or hurdles that increase dropout and pushout rates from schools" (p. 1). England's accountability reforms, including increased stress associated with students' test-taking, changed ranking systems within schools, and other changes, may contribute to student expulsion and suspension (Rustique-Forrester, 2005). Rustique-Forrester's (2005) study provides "an international perspective on recent trends toward greater accountability in education, pointing to a complex inter-relationship between the pressures of national policies and the unintended consequences on schools' organizational and teachers' instructional capacities" (p. 1).

Designing accountability systems, especially in the United States, must include consideration of any inherent pressures, which may or may not affect the ability of schools and teachers to address the learning needs of lower-performing students (Rustique-Forrester, 2005). Rustique-Forrester's (2005) findings were "that England's high-stakes approach to accountability, combined with the dynamics of school choice and other curriculum and testing pressures led to a narrowing of the curriculum, the marginalization of low-performing students, and a climate perceived by teachers to be less tolerant of students with academic and behavioral difficulties" (p. 1). Approximately 270 teachers at four schools were interviewed, and the data were compiled in charts supplied in the article.

Rustique-Forrester (2005) recommends that future research examine the "complex interactions that accountability policies will have with other aspects of a state's educational system, especially with regard to policies on testing, graduation, and choice" (p. 32). Rustique-Forrester's (2005) report on the "dynamics of exclusion…and other mechanisms and measures used to judge schools" (p. 33) will be helpful in determining associated effects as research continues on evaluating ranking criteria internationally. The findings and conclusions as stated by Rustique-Forrester (2005) appear to be logical, unbiased, and sound due to the relevant scope of the research methodologies used.

Baker and Wiseman (2008) compiled a book containing works by several authors, such as Gaele Goastellec, who wrote that "analyzing changes in access to higher education from an international comparative perspective helps to understand some of the main worldwide transformations of higher education systems and to identify both quantitative and qualitative trends, as well as policies and organizational processes" (p. 1). John C. Weidman and Adiya Enkhjargal (in Baker and Wiseman, 2008) disclosed that "regardless of increasing success, fighting corruption on a global scale has become ever more necessary. More than 20 percent of loans distributed by the World Bank are linked to corruption" (p. 63).

Furthermore, Baker and Wiseman (2008) included research by Philip Altbach and Patti McGill Peterson, who presented an assessment that there is a "neglected element of higher education worldwide – the potential and reality of its contribution to the 'soft power' (encompassing the nexus of influences in world affairs that relate to culture, science, technology, and other subtle forces of nations)" (p. 314). Altbach and Peterson (in Baker and Wiseman, 2008) also recognize that "institutions of higher education are central to a country's national as well as international aspirations" (p. 313), and they wrote that "US academic and research systems remain the strongest in the world" (p. 323). Although this writer agrees with the former statement, Altbach and Peterson plainly need to revisit their opinion that the US academic system is the world's strongest; much research suggests otherwise.

Altbach and Peterson (in Baker and Wiseman, 2008) noted that "other regions are successfully competing with the United States for foreign students" (p. 325), which could foretell a reason for America's declining global educational positioning. In fact, Altbach and Peterson (in Baker and Wiseman, 2008) report that "A poll of 22 countries by the BBC World Service revealed that China is viewed as playing a more positive role in the world than the United States" (p. 325). Transnational effects and influences affecting the global redistribution of foreign students need further analysis to determine their role in declining US educational performance.

Determining the validity of worldwide educational performance rankings is relevant to the research objectives. Kovaleva (2010) reports that the TIMSS study was "conducted by the Center for the Evaluation of the Quality of Education in the Institute for the Content and Methods of Instruction, Russian Academy of Education…the Ministry of Education and Science of the Russian Federation, the Federal Service for Oversight in the Sphere of Education and Science" (p. 73) and other administrative educational groups located in the participating geographical regions. Curiously, no international entity appears to be represented.

However, further research will disclose any international relationships. Determining whether the TIMSS study as discussed by Kovaleva (2010) was a collaborative global undertaking is important because it is vital that a review of educational performance strategies be conducted globally rather than contained within nations. If research findings reveal that the TIMSS study is not a worldwide analysis, the reformation of educational strategies to improve learning performance on a worldwide scale is inhibited, which itself provides a basis for recommending how to improve international learning outcomes.

Kovaleva (2010) provides specific details on how the TIMSS study is conducted. For example, the TIMSS study evaluates (1) comparisons between international groups of students, (2) changes in the quality of teaching, (3) changes in teaching methodologies, (4) characteristics of the curriculum content, and (5) factors influencing the quality of teaching. Using these categories as guidelines, Kovaleva (2010) reports on the findings from the TIMSS study. After summarizing the significant results of the study, Kovaleva (2010) presents several questions for researchers and educators, which underscores the need for further research. Implementing the key variables disclosed by the study, which could improve learning on a global scale, enhances the potential for attaining the study's objectives, as Kovaleva (2010) predicts.

Sanchez (2010) reports on America's reaction as the "results of the Program for International Student Assessment, or PISA…showed students in Shanghai well ahead of the global pack. The study tested 15-year-olds from 65 Organization for Economic Cooperation and Development member nations in math, science and reading" (para 3), with the United States ranking 24th. Reacting to yet another year of falling performance in education, "Britain, France, Germany and other nations announced plans to study and overhaul their educational systems. The U.S. pretty much shrugged. And some began searching for faults in the analysis" (Sanchez, 2010, para 5).

Unwilling to accept the results, American educators and lawmakers addressed the newest PISA figures by adjusting the data. For example, by subtracting "the inconvenient students (the underachieving minority students)" (Sanchez, 2010, para 6), America's ranking improves. Sanchez (2010) writes that for anyone who wants to "know the biggest problem in the U.S. education system, it's this: inequality" (para 7). "The highest achieving school systems in the world were the ones where social class tends not to predict student achievement" (Sanchez, 2010, para 8). The PISA report demonstrated that student achievement is higher when "students from all social and economic backgrounds are well represented among highest academic achievers" (Sanchez, 2010, para 8).

Sanchez (2010) further reported that in the fifteen years before the 2008 PISA report, "the U.S. fell from being ranked No. 2 to No. 13 in college graduation rates" (para 10). America could reduce class disparities in its education system by "significantly boosting diploma rates in four years for blacks (46 percent), Hispanics (44 percent) and American Indians (49 percent)" (para 11). Sanchez (2010) suggested that educational systems in America must be accessible to all, "globally competitive on quality; provide people from all classes a fair chance to get the right kind of education to succeed; and achieve all this at a price that the nation can afford" (para 12). If America's labor force begins to gain a knowledge-based education, economic prosperity worldwide could result (Sanchez, 2010).

Levy (2010) presents insight about a global decrease in private post-secondary education, and indicates that there are two dominant reasons for this phenomenon: (1) "social and (2) political or public-sector policies" (p. 1). The goal in reviewing Levy's (2010) article is that his findings may indicate that a decrease in private post-secondary enrollment reflects the tendency toward declining educational performance in some nations. Identifying reasons for declining performance enables recommendations to improve performance.

Levy (2010) evaluated several countries' private post-secondary institution enrollment trends, and noted that "From 1996 to 2006, Georgian (Eastern Europe) private higher education fell from 34 to 22 percent of enrollment" (p. 2). Fortunately, "evidence of new initiatives includes reaching out (including internationally) to new kinds of students, in new modalities" (Levy, 2010, p. 5). "The decline of private higher education warrants analysis for contemporary dynamics as well as historical and future ones" (Levy, 2010, p. 5). Discovering areas of unfinished research, or data needing to be updated, gives researchers evidence that further investigation is needed to explain falling enrollment in private higher education institutions, and suggests that new in-depth research may help to create recommendations for improving educational performance.

Mahoe (2004) details her perspectives on preparing a dissertation, which involved the use of quantitative research methods. In particular, Mahoe's (2004) writing goal for her dissertation included providing advice and encouraging words to her peers about working on a dissertation with unrelenting perseverance. Mahoe's (2004) exuberance arose from the practice gained during her dissertation process that resulted in "one of her most rewarding challenges" (p. 34). Mahoe's (2004) greatest reward was formulating answers to improve retention of public high school freshman students.

Mahoe (2004) identified areas for integrating her dissertation topic's research needs into her teaching profession. A preliminary investigation of the available research yielded Mahoe (2004) few results until she expanded the parameters of her topic on high school freshman retention. Examination of the expanded assortment of resources successfully pinpointed Mahoe's (2004) research question.

Mahoe (2004) supplemented her collection of resources by taking advantage of her network of peers at her high school. In addition, after Mahoe (2004) identified deficiencies in the sorting standards within the available retention data, a more accurate and lower retention percentage was identified, which eliminated discrepancies and improved the reliability and validity of the data for her study. However, the improved retention rate seen in the data from her school, which conflicted with Mahoe's (2004) thesis, made her pause and reflect, and enabled her to critically evaluate the efficacy of her project. Upon identifying a dependable nationwide database on student retention, Mahoe (2004) became convinced her study could proceed.

Furthermore, Mahoe (2004) chose additional subjects to include in her study: "how school structures and processes serve as supports to students’ academic and social engagement (for all four years of high school), and their subsequent influence on student persistence" (p. 36). Whether conducting original data collection or using secondary data, Mahoe (2004) advised that researchers (1) "should be able to give a plausible explanation for every significant and non-significant finding based on the pertinent literature" (p. 36), and (2) "use all the members of (the) dissertation committee" (p. 37). Each committee member (1) should be kept informed, (2) has inimitable expertise, and (3) provides a knowledge resource (Mahoe, 2004). While writers have expertise about their subject matter, the dissertation team promotes writers' efforts to an advanced scholarly level (Mahoe, 2004).

Moore's (2010) dissertation presents an investigation into the technical needs of multicultural societies. Moore's (2010) topic is relevant to this writer's research because he suggests that technological tools such as web site graphics be as universally recognizable as possible to facilitate good global communication. Identifying the academic processes in countries that are improving in academic ranking may demonstrate that universally recognizable graphics have enabled some nations to advance faster in academic performance. Therefore, Moore's (2010) suggestions shed light upon one academic process that could be used in countries that are improving in academic ranking. By collaboratively integrating universally recognizable graphics into global academic processes, all nations would receive a similar advantage and experience improved communications.

Logically, for a global effort to successfully improve learning performance, educators around the world need to communicate well. Technical writers who create the communication interfaces must provide educators with the ability to communicate better. Moore (2010) suggests that a cautious use of graphical design can reduce barriers caused by language and cultural differences. Working in "multicultural and international collaboration teams…is effective and appropriate for our audiences" (Moore, 2010, p. 60).

Boland's (2004) dissertation presented information from several empirical studies, including Boland's (2004) own study, that tested Intelligence Quotients (IQ).

Boland (2004) reported on how IQ scores from the studies were affected by the "level of education, quality of education, country of education/residence, and level of acculturation to the majority U.S. culture" (p. 122) using two Asian Indian populations living in America and India. Boland (2004) wrote that "it is not that individuals from certain cultures are lacking in certain cognitive abilities, but rather that only those abilities that are relevant and useful in a particular cultural context are developed and expressed" (p. 21). Therefore, while the "cultural environment…has a significant effect on the intellectual skills that are developed" (p. 20), Boland's (2004) study disclosed that educational levels closely reflected IQ scores "regardless of country of education/residence" (p. 122).

Reviewing Boland's (2004) dissertation aids this writer's investigation into why other countries' academic performance has exceeded America's performance, and suggests solutions for improving academic performance. Specifically, Boland (2004) provided valuable information for this writer's dissertation research because the data collected by the International Association for the Evaluation of Educational Achievement (2007), which reports each country's educational performance rank, cannot accurately report the rankings if the IEA (2007) depends partially upon a student's IQ. IQ should not be considered by the IEA (2007) because the fourth- and eighth-grade students assessed around the world by the IEA (2007) have closely matched IQs at each grade level, and so IQ would not cause disparity.

Consequently, measuring students' IQs would be irrelevant and biased. Once the IEA (2007) data are reviewed for any remarks on IQ, this writer can determine if there is bias in the IEA (2007) report. Improving learning performance globally, and identifying why America's rank has slumped, first depends upon an accurate and unbiased assessment tool for ranking each country. Assuming that the current assessment tool is unbiased would seem foolish.

Wang and Lin's (2005) published dissertation reported on several studies revealing that American students performed more poorly on international tests when compared to students in Eastern Asia. The study findings noted significant differences in the development and implementation of curriculum policies and materials between America and the excelling Eastern Asian countries (Wang and Lin, 2005). In fact, less focused and more repetitious curriculum materials were used in America, and American "curriculum policy is less authoritative, less specific, and less consistent" (Wang and Lin, 2005, p. 3).

Interestingly, although Eastern Asian students overall achieved higher scores than American students, and exhibited superior computational and routine problem-solving skills, American students performed the same as or better than their Chinese counterparts on "open, creative problem-solving tasks" (p. 4). Consequently, America should not necessarily duplicate Chinese instructional practices in an attempt to improve U.S. mathematics performance (Wang and Lin, 2004). "Although Chinese students are stronger than U.S. students in abstract mathematics reasoning and representation, Chinese students do not show stronger performance in graphing, using tables, and open-process problem solving" (Wang and Lin, 2004, p. 6).

Contrasting learner outcomes for which no reason is easily identified raises questions about the efficacy of the survey methodology. Caution is indicated when accepting such survey results, and further validation is recommended. Noteworthy research reported by Wang and Lin (2004) indicated that, when comparing Chinese and American teachers, the Chinese teachers more effectively used available teaching time for student learning, developed more organized "whole-class instruction, and offered more complex explanations and feedback to their students" (p. 7). In addition, family values and processes added to the degree of variation between Chinese and American parents. For example, Wang and Lin (2004) reported that Chinese parents establish family study practices tied to those values and processes, set higher challenges for their "children's mathematics achievement, engage their children in working more on mathematics at home, and use formal and systematic instructional approaches at home" (p. 9). Wang and Lin (2004) discussed research findings by several peers, and offered several scenarios for future research.

The paper by Wang and Lin (2005) presented some discomfiture because differences between races were emphasized rather than suggestions for mutual growth opportunities globally. As mentioned above, if nations could work collaboratively to improve global academic processes, which would minimize the negative aura created by competition and ranking, the differences and similarities discussed by Wang and Lin (2005) would diminish as nations learn to experience comparable opportunities to improve education.

"The International Association for the Evaluation of Educational Achievement (IEA) is an international consortium of research institutions in more than 40 countries" (Beatty, 1997, p. 3), which "describes and explains differences in student achievement" (p. 3). The IEAs focus is "to improve the teaching and learning of mathematics (and science) around the world" (Beatty, 1997, p. 3). Beatty (1997) wrote that "international comparative studies (have shown that) education systems vary substantially" (p. 1). "The content of mathematics and science curricula and textbooks" including "student attitudes and experiences, teaching practices, and school resources" (Beatty, 1997, p. 3) are just a few of the IEAs research concentrations. "The heavy interest on standardized test scores in the United States has distorted both curricula and expectations for student learning" (Beatty, 1997, p. 22).

Science teachers in the United States "average significantly lower hours per week devoted to both professional reading and development and to lesson planning than did such higher scoring countries as Japan, Hungary and Singapore" (Beatty, 1997, p. 25). "No reform ought to be undertaken without a commitment to three things: provide adequate resources, sustain (reform) it long enough to be sure it has had time enough to take hold, and evaluate its impact" (Beatty, 1997, p. 30). Further research, "dialogue and debate based on the TIMSS results would help decision makers focus their reform efforts" (Beatty, 1997, p. 30).

Realonlinedegrees.com (2010) reported that "When it comes to the mathematics and sciences, U.S. 8th grade students are lagging behind other countries" (para 1). Lagging test scores have heightened concerns among lawmakers and educators and portend a weakening economic position for America globally. Surprisingly, test scores may not represent a complete assessment of how America's students are faring in math and science. According to The Bent of Tau Beta Pi (2007), "Students in affluent suburban U.S. school districts score nearly as well as students in Singapore, the runaway leader in the Third International Mathematics and Science Study (TIMSS) math scores" (p. 13).

Because the "gap between America’s top-performing schools and low-performing schools is significantly greater than the gap between America and other nations" (The Bent of Tau Beta Pi, 2007, p. 14), directing "attention toward what affluent school systems are doing well" (p. 14) could replicate those same results across the board. For America to compete globally, more, not fewer, college graduates must become physicians, scientists, or engineers. Lawmakers state that America does not do an adequate job of preparing individuals for technological fields, and that focusing on the overall positive results of our educational system is more important than test scores alone.

The Organisation for Economic Co-operation and Development (OECD, 2010) reported that Korea and Finland top the OECD’s latest Programme for International Student Assessment (PISA) survey, which assessed the reading literacy, math, and science performance of a half-million 15-year-olds in more than 70 economies and, for the first time, tested students’ ability to manage digital information. The OECD (2010) Secretary-General, Angel Gurría, stated that “Better educational outcomes are a strong predictor for future economic growth (and) while national income and educational achievement are still related, PISA shows that two countries with similar levels of prosperity can produce very different results" (para 5). Interestingly, "the best school systems were the most equitable - students do well regardless of their socio-economic background. But schools that select students based on ability early show the greatest differences in performance by socio-economic background" (OECD, 2010, para 9).

The OECD (2010) also found that "High performing systems allow schools to design curricula and establish assessment policies but don’t necessarily allow competition for students" (para 12). An important goal of the OECD (2010) is to help countries see "how their school systems match up globally with regard to their quality, equity and efficiency" (para 17). The report further notes that "the best performing education systems show what others can aspire to, as well as inspire national efforts to help students to learn better, teachers to teach better, and school systems to become more effective" (para 17). Apparently, educators and lawmakers believe that lagging test scores foreshadow adverse impacts on future economies.

The US Department of Education (2010) reported on comments made by Secretary Arne Duncan at the Council on Foreign Relations Meeting regarding international engagement through education. Secretary Duncan admitted that the United States had not "been compelled to meet our global neighbors on their own terms, and learn about their histories, values and viewpoints" (para 7). Only "40% of our 25-34 year olds earn a two-year or four-year college degree—the same rate as a generation ago" (US Department of Education, 2010, para 12). Secretary Duncan noted that "on recent international tests of math literacy, our 15-year-olds scored 24th out of 29 developed nations, and 21st out of 30 nations in science. The U.S. is now 18th out of 24 industrialized nations in high school graduation rates" (US Department of Education, 2010, para 13). America now has "partnerships with other nations yield(ing) a wide range of bilateral education conferences, alliances, and other joint efforts" (US Department of Education, 2010, para 28). "Such collaboration can inform and strengthen our reform efforts nationally, even as it helps improve standards of teaching and learning—and fosters understanding—internationally" (US Department of Education, 2010, para 34).

Cohen, Bloom, and Malin (2006) review research related to students' academic performance in primary and secondary schools globally. Major topics included the state of education, "the quality and quantity of available data on education, the history of education and obstacles to expansion, the means of expanding access and improving education in developing countries, estimates of the costs, and the potential consequences of expansion" (Cohen et al., 2006, p. 2). The book is relevant to the subject under research because it reports that educational data from global sources, which are so important to empirical research, are neither completely reliable nor comprehensive. The UNESCO Institute for Statistics in Montreal, however, contains the world's most accurate information, and could be investigated as the research in educational ranking proceeds.

Surprisingly, the challenges facing the international goal of providing all children with a good primary and secondary education include enrolling ninety-seven million primary-school-age and two hundred twenty-six million secondary-school-age children, not counting another ninety million five-to-seventeen-year-olds in developing countries. Does enabling educational expansion, which secures societal and political infrastructures, currently affect educational ranking globally? Cohen, Bloom, and Malin (2006) propose that educating children is universally accepted as a humanitarian responsibility and a directive to ensure human rights.

International communities that strive to raise their educational rank by improving educational access for their populace confront leaders already over-burdened by the need to eliminate corruption, improve culture, and decrease political division caused by differing agendas. Providing inaccurate data on countries' educational systems hinders the creation of efficient educational policy, which relies on holding politicians and school leaders accountable. Difficulties arise when citizens' more pressing concerns are providing food, housing, and medical care for their families. Fortunately, international neighbors are achieving a more interconnected educational system, which supports a global labor market, a stronger society, and a freedom to learn. These factors and more need examination as the investigation continues into why other nations have excelled in academic performance.

Smith's (2005) book offers an exposé of the academic under-achievement of nations and schools, as demonstrated by falling educational standards, and of the responses intended to ensure that all children are taught equitably. The implementation of high-stakes testing and new school accountability systems is, in actuality, resulting in schools being labeled as under-achieving because of inaccurate methods of reporting data. In response to this dilemma, policy makers and teachers endorse partnered research incentives with the goal of improving student achievement.

Achieving such a goal requires improving the quality of data collection within countries' educational systems, as suggested by Cohen, Bloom, and Malin (2006). Smith's (2005) book likewise provides insight into reconsidering how educational research is conducted. As the investigation continues into why other nations have excelled in academic performance, critical review of research methodologies is essential to answering why there are differences in educational ranking globally. Specifically, if America is ranked by two reports as twenty-fifth and thirtieth in the combined math and science areas, then the data are not being analyzed similarly. Therefore, the reasons that America is doing better or worse than its ranked position suggests cannot be accurately discerned. Discerning why America is doing better or worse is the purpose of the primary investigation.
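To make the rank-discrepancy reasoning above concrete, the following is a minimal sketch, in Python, of how two reports' rankings could be compared and large gaps flagged. The function name, the tolerance, and every rank shown are hypothetical values invented for illustration only; they are not taken from TIMSS, PISA, or any report cited in this paper.

# Hypothetical illustration only: invented ranks, not TIMSS or PISA data.

def flag_rank_discrepancies(report_a, report_b, tolerance=2):
    """Return countries whose ranks differ by more than `tolerance` across two reports."""
    flagged = {}
    for country in report_a.keys() & report_b.keys():  # countries appearing in both reports
        gap = abs(report_a[country] - report_b[country])
        if gap > tolerance:
            flagged[country] = gap
    return flagged

# Invented ranks echoing the "twenty-fifth versus thirtieth" scenario described above.
report_a = {"United States": 25, "Finland": 2, "Korea": 1}
report_b = {"United States": 30, "Finland": 3, "Korea": 2}

print(flag_rank_discrepancies(report_a, report_b))  # prints {'United States': 5}

A gap of five positions for the same country in the same subject areas, as in this invented example, would signal that the two reporting entities are not analyzing the underlying data in the same way.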

Salmi (2009) discusses how the increased sharing of tertiary educational resources has prompted institutions worldwide to reassess their goals. Increased and varying reports on educational rank are affecting nations' growing aspirations to be positioned at or near the top in academic achievement, which includes outstanding performance in research. As governments and universities increasingly seek to become the best in academic performance globally, reliance upon comparison data gains significance for many, though not all, nations. If achieving world-class educational standards, which have been neither accurately defined nor universally accepted, reflects a nation's ability to improve its infrastructure from within, then such achievement may promise a strong future. Salmi's (2009) book provides tools for nations seeking such a future. Further review of Salmi's (2009) toolbox contents could shed some light on how America's educational leaders could take advantage of such tools to improve the nation's educational performance.

Summary of Important Research Findings

Several researchers noted in the literature review disclosed problems related to the reports issued on ranking worldwide educational performance. For example, Lang and Zha (2004) noted that there are no standardized criteria for measuring educational quality among universities. In fact, Adler and Harzing (2009) called for a moratorium on ranking until the validity and reliability of data analysis could be assured. Aguillo, Ortega, and Fernandez (2008) suggested, on the basis of web indicators alone, that methods used to rank academic performance are vastly incorrect. Williams (2008) and Rustique-Forrester (2005) discussed the desirable characteristics of good ranking methodology in light of their findings of invalid ranking methodologies. Kovaleva (2010) suggested that the TIMSS performance measures are unreliable. Sanchez (2010) reported that American educators do not accept the rankings and readjust the data according to their own performance measures. Moore (2010) suggested that universally accepted website graphics helped some nations to develop faster academically. Boland (2004) criticized the ranking data for measuring performance based upon specific intelligence quotients. Given these researchers' findings, further investigation is warranted, since data-reporting inconsistencies are widely recognized.

Research Problem

Nelson Mandela once said, "If you talk to a man in a language he understands, that goes to his head. If you talk to him in his own language, that goes to his heart" (US Department of Education, 2010, para 35). American educators are not yet "teaching our students how to speak to the hearts of our neighbors around the globe" (US Department of Education, 2010, para 37). Despite reports that America has started initiatives to improve global alliances, more research effort is needed to identify how other countries' higher performance in learning can be emulated in the United States. However, based upon research to date, several factors affect such identification.

For example, the literature review revealed that researchers involved in educational performance ranking are discovering areas of unfinished research, data-collection inconsistencies, and data needing to be updated; these discoveries are evidence that further investigation is needed to explain the declining ranking in academic performance. Contrasting learner outcomes, when no reason is easily identified, raise questions about the efficacy of the survey methodology. Caution is indicated when accepting such survey results, and further validation is recommended. For example, does enabling educational expansion, which secures societal and political infrastructures, currently affect educational ranking globally? If so, is this information included in the data collection? Consistent reporting of how and what affects global ranking in educational achievement is missing. Educational data from global sources, which are so important to empirical research, are neither completely reliable nor comprehensive, and must be examined closely.

Methodology

Research addressing America's 30th ranking in worldwide education discloses that efforts to improve America's academic standing seem to be minimal as its position continues to decline. Investigating and proposing solutions to how and why America's academic position has reached such a precarious level requires an analysis of global efforts to improve academic performance, and supports a commitment to effect improvements to America's educational processes and standards (The Regents of the University of Michigan, 2008). The University of Michigan (2008) indicates that identifying appropriate and applicable research to contribute solutions supports (1) economic goals by creating "cooperation between government(s), universities, and industries to develop capabilities and production techniques that improve our living standard" (para 4); and (2) health and social goals, which "addresses public concerns about school quality, the teaching of mathematics and science, and the improvement of family life, and quality of life" (para 5).

Consequently, providing inaccurate data on countries' educational systems hinders the creation of efficient educational policy, which relies on holding politicians and school leaders around the world accountable. How will this be achieved? A critical review of research methodologies is essential to finding the answers about why there are differences in educational ranking globally, and to correcting them. Collecting definitive information from the entities reporting global educational performance rankings is essential. Exposing inconsistencies when comparing quantitative and qualitative data related to gauging educational performance must be the first priority in addressing the reliability and validity concerns. Only by exposing the inconsistencies, and fixing them, can reliability and validity be achieved and accepted worldwide.
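As one hedged illustration of how such inconsistencies might be exposed quantitatively, the sketch below computes a Spearman rank correlation between two ranking lists: values near 1 indicate that two reporting entities order countries consistently, while lower values signal disagreement of the kind described above. The entity names and ranks are assumptions invented for demonstration, not figures from any entity discussed in this paper.

# Hypothetical illustration only: invented ranks, not data from any ranking entity cited here.

def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two equal-length lists of ranks (assumes no tied ranks)."""
    n = len(ranks_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Ranks of the same five hypothetical countries as ordered by two different reporting entities.
entity_one = [1, 2, 3, 4, 5]
entity_two = [2, 1, 3, 5, 4]

print(round(spearman_rho(entity_one, entity_two), 2))  # prints 0.8

A correlation well below 1.0, as in this invented example, would indicate that the two entities are not ranking the same countries in the same order, which is precisely the reliability concern that must be resolved before the rankings can guide policy.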

Once an accurate accounting of data collection is achieved by examining data from the collecting entities, educators can depend upon the ranking information to guide them as they seek to understand and adapt the practices used by the excelling countries. Correcting the data-collection inconsistencies will enable educators to rely on the information presented in future reports on learning performance ranking. Providing a viable solution to the issues existing in educational performance ranking equates to an original contribution to the academic field.

Conclusion

The investigation and identification of academic excellence globally offers solutions for America's educational and government leaders, who need to address America's apparently diminishing academic performance. Calling for reform of America's educational systems may improve its chances of remaining globally competitive and economically sound. The findings reported above provide insight into possible answers as to why other countries' academic performance has exceeded America's. Identifying the factors that have enabled other nations to excel in academic performance provides solutions that may turn the tide on America's continuing trend toward weakening academic performance and leadership.

Furthermore, if nations could work collaboratively to improve global academic processes, which would minimize the negative aura created by competition and ranking, the similarities and differences emanating from educational ranking would diminish, and nations could learn to experience comparable opportunities to improve education. Focusing on the overall positive results of our educational systems is more important than test scores alone.

References:

Adler, N., & Harzing, A. (2009). When knowledge wins: Transcending the sense and nonsense of academic rankings. Academy of Management Learning & Education, (8)1, p. 72. Retrieved January 14, 2011, from EBSCOHost.

Aguillo, I.F., Ortega, J.L., & Fernandez, M. (2008, July – October). Webometric ranking of world universities: Introduction, methodology, and future developments. Higher Education in Europe,(33)2/3, p. 233. Retrieved January 14, 2011, from
EBSCOHost.

Baker, D.P., & Wiseman, A.W. (2008). International perspectives on education and society, volume 9: The worldwide transformation of higher education. United Kingdom (Bingley): Emerald Publishing Ltd. Retrieved from http://site.ebrary.com.proxy.ncu.
edu/lib/ncent/docDetail.action?docID=10310678

Beatty, A. (1997). Learning from TIMSS: Results from the Third International Mathematics and Science Study, summary of a symposium. Washington, D.C.: National Academy Press. Retrieved December 14, 2010, from http://books.nap.edu/openbook.php?record_id=5937&page=R1

Boland, M.G. (2004). The effects of country of origin, education, and acculturation on intelligence test performance in Asian Indian and Indian American. (Doctoral Dissertation). Retrieved January 10, 2011, from ProQuest. (AAT 3143806)

Bozick, R., & Ingels, S. J. (2008). Mathematics coursetaking and achievement at the end of high school: Evidence from the education longitudinal study of 2002 statistical analysis report. Retrieved December 14, 2010, from http://www.eric.ed.gov/PDFS/ED499546.pdf

Brown, A.S., & Brown, L.L. (2007, Winter). What are science and math test scores really telling US? The Bent of Tau Beta Pi, p. 13. Retrieved December 14, 2010, from http://www.tbp.org/pages/publications/Bent/Features/W07Brown.pdf

Cohen, J.E., Bloom, D.E., & Malin, M.B. (2006). Educating all children: A global agenda. Cambridge, MA: MIT Press. Retrieved January 14, 2011, from Ebrary.

International Association for the Evaluation of Educational Achievement. (2007). Trends in international mathematics and science study 2007. Retrieved January 2, 2011, from http://www.iea.nl/timss2007.html

Kovaleva, G. (2010, November). The TIMSS study: The quality of education in mathematics and natural sciences in Russia exceeds average international indicators. Russian Education and Society, (52)11, p. 72. Retrieved from http://web.ebscohost.com.proxy1.ncu.edu/ehost/pdfviewer/pdfviewer?hid=10&sid=626f0db2-0d25-46d9-8f8b-0037f211e5cc%40sessionmgr13&vid=2

Lang, D.W., & Zha, Q. (2004). Comparing universities: A case study between Canada and China. Higher Education Policy, 17, p. 339. Retrieved from ProQuest.

Levy, D.C. (2010, Fall). An international exploration of decline. Center for International Higher Education, 60, p. 1. Retrieved from https://htmldbprod.bc.edu/pls/htmldb/f?p=2290:4:330069749640906::NO:RP,4:P0_CONTENT_ID:112032

Mahoe, R. (2004). Reflections on the dissertation process and the use of secondary data. (Doctoral Dissertation). Retrieved January 11, 2011, from Education Resources Information Center (ERIC). (J877610)

Mapping 2005 state proficiency standards onto the NAEP scales: Research and development report. NCES 2007-482. (2007). National Center for Education Statistics. Retrieved December 14, 2010, from http://www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED497042

Mohrman, K. (2008). The emerging global model with Chinese characteristics. Higher Education Policy, 21, p. 29. Retrieved January 14, 2011, from ProQuest.

Moore, B.R. (2010). Designing for multicultural and international audiences: Creating culturally-intelligent visual rhetoric and overcoming ethnocentrism. (Doctoral Dissertation). Retrieved January 10, 2011, from ProQuest.(AAT 1485156)

Organisation for Economic Co-operation and Development (OECD). (2010). Programme for International Student Assessment (PISA) 2009 results: Executive summary. Retrieved December 8, 2010, from http://www.oecd.org/dataoecd/34/60/46619703.pdf

Organisation for Economic Co-operation and Development. (2010, December 7). Education: Korea and Finland top OECD's latest PISA survey of education performance. Retrieved December 17, 2010, from http://www.oecd.org/searchResult/0,3400,en_21571361_44315115_1_1_1_1_1,00.html

Realonlinedegrees.com. (2010). Education rankings by country. Retrieved December 24, 2010, from http://www.realonlinedegrees.com/education-rankings-by-country/

Rustique-Forrester, E. (2005, April 8). Accountability and the pressures to exclude: A cautionary tale from England. Education Policy Analysis Archives, (13)26, p. 1. Retrieved January 14, 2011, from ERIC.

Salmi, J. (2009). Challenge of establishing world class universities. Herndon, VA: World Bank Publications. Retrieved January 14, 2011, from Ebrary.

Sanchez, M. (2010, December 18). The Achilles heel of American education. The Virginian-Pilot and The Ledger-Star. Retrieved January 14, 2011, from http://findarticles.com/p/news-articles/virginian-pilot-ledger-star-norfolk/mi_8014/is_20101218/achilles-heel-american-education/ai_n56513730/

Singh, G. (2008). Research assessments and rankings: Accounting for accountability in "higher education ltd". International Education Journal: Comparative Perspectives, 9(1), 15-30. Retrieved January 14, 2011, from http://www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=EJ 94339

Smith, E. (2005). Analysing underachievement in schools. London, UK: Continuum International Publishing. Retrieved January 14, 2011, from Ebrary.

The Regents of the University of Michigan. (2008). The value of research. Retrieved December 8, 2010, from http://www.drda.umich.edu/
research_guide/research_briefing/2001/benefits2001.html?print

US Department of Education. (2010, May 26). International engagement through education: Remarks by Secretary Arne Duncan at the Council on Foreign Relations Meeting. Retrieved December 12, 2010, from http://www.ed.gov/news/speeches/international-engagement-through-education-remarks-secretary-arne-duncan-council-forei

Usher, A. (2009). University rankings 2.0: New frontiers in institutional comparisons. Australian Universities Review, 51(2), p. 87. Retrieved December 12, 2010, from http://www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=EJ864037

Wang, J., & Lin, E. (2005). Comparative studies on U.S. and Chinese mathematics learning and implications for standards-based mathematics teaching reform. (Doctoral Dissertation). Retrieved January 10, 2011, from ProQuest. (EJ727637)

Williams, R. (2008). Methodology, meaning and usefulness of rankings.
Australian Universities' Review, (50)2, p. 51. Retrieved January 10, 2011, from ERIC.