From New Deal Training Programs to World War II Testing:
Ideological maintenance, test standards, college entrance, and
predictive validity (1933-1946).
Paul F. Ballantyne
This chapter covers the training programs and testing technologies used in America during the Great Depression and World War II. As we are often reminded by psychology textbooks, improved statistical techniques developed during the depression provided a reliable means of sorting war industry workers and soldiers according to pressing and variable military need. What we are not often told, however, is exactly how the ongoing conservative assumptions and motives of the testing tradition diverged considerably from the progressive "New Deal" (and wartime) context of societal change in America. By highlighting the guidance and universality aspects of Federal New Deal Youth Programs (including their topical extension during wartime) and by contrasting them with contemporary refinements to the educational and vocational ability testing subdisciplines (including their widespread administrative extension during W.W.II), it is hoped that the selection and segregationist emphasis of the latter will become clear.
In section one, the economic and political context of the Great Depression (1929-1942), including its disastrous effects on public educational funding and the more positive emergence of the New Deal in the series of so-called alphabetic economic relief agencies (such as the NRA, FERA, CCC, and NYA), is covered. As the traditionally limited role of federal involvement in domestic economic affairs, ideological guidance, and educational funding was successively expanded, the older economic impetus for psychometric sorting (entrenched in educational policy during the 1920s) receded into the background. While psychometric streaming of students continued in the ailing middle-class public school system, it received virtually no application in the historically important New Deal adult or youth programs (which emphasized civic works projects and applied vocational training) for the working class. Similarly, one high-profile educational experiment made a point of skirting the statistically centered tradition of standardized college entrance examinations: in this Eight-Year Study, standardized college entrance exams for selected schools were waived between 1933 and 1941.
Section two begins by outlining how the psychometric subdiscipline weathered the depression-era market-share slump for testing. The survival techniques involved not only an active search for improved standards of test reliability but also an increased emphasis on promoting management-biased vocational opinion polls, worker "attitude" assessment, and "social relations" training. Finally, the unprecedented W.W.II investment in testing technologies (for the "classification" of American inductees; the production of "qualifying exams;" and the "assessment" of specialized military training) is covered. This applied context of cooperation between the military, the APA, and the AAAP provided a fair testing ground for the predictive validity of so-called general intelligence measures (such as the AGCT) and of more specific training or assessment devices used to select suitable candidates for specialized training (including pilot, bombardier, and Navy gunner selection).
The failure of the former "general" tests sparked a brief academic reevaluation of an older ontological issue (i.e., the relationship between an individual's generalizable academic intelligence vs. their more specific vocational knowledge). Ironically, however, by the end of the war, when the future educational prospects of American G.I.s and of national ethnic groups (especially "Negro" Americans) were under open debate in the halls of government, the fundamentally conservative psychometric subdiscipline was no closer to resolution on this issue than it had been 50 years earlier. The subdisciplinary emphasis had shifted back toward the safer empirical-methods issue of predictive validity. This shift went hand in hand with an active promotion of vocational and intellectual testing measures as purely administrative devices for the post-war marketplace. Only after this marketability agenda was well established did there appear any passing acknowledgments that the "modern" conceptual and empirical adjustments of the psychometric tradition (including theoretical interactionism and operational definition) might be woefully inadequate for providing anything more than a descriptive analysis of the distribution or development of intellectual functions in human beings.
The Great Depression:
Educational Funding, New Deal Programs, and Ideological Maintenance
Throughout the Great Depression, the public schools and the federal government were in conflict regarding the proper means to avoid wasting a generation of youth potential. Their dispute was over how best to structure the social and societal context of American youths (aged 15-18) in order to promote the normal production of physical, mental, and civic competence. The ability testing tradition, as a third subdiscipline concerned with such questions, predated this modern debate over the allocation of cultural/mental infrastructure.
The older tradition tended to reify the pre-existence of social and societal competencies (to a greater or lesser degree) as if they were simply allowed to "develop" (mature) within the broader realities of a cultural context. Thus, while the New Deal form of discourse tended to be transformative, the ability testing tradition tended to be merely additive or adjustive. The contrast with the public school discipline, on the other hand, was less striking. For various reasons, the public schools kept one foot in both of these camps. They did recognize, however, that the older meritocratic ideals of the ability testing approach to educational efficiency were inappropriate given the conditions of complete economic calamity.
Hit hard by the depression, school administrators turned their attention away from the "educational progress" of the 1920s and toward the pragmatic maintenance of a basic curriculum, of staff, and of building facilities. Educational associations resented the federal government's expenditures on the New Deal youth programs (such as the CCC and NYA) and even resented the educational components of adult aid agencies (such as the NRA, PWA, and WPA). Both types of program were blamed for drawing funds and youths away from the ailing public school system. Roosevelt's New Deal government, on the other hand, argued that "curricular entrenchment" (i.e., the lack of vocational relevance of classroom work) had already caused droves of students to leave the public school system for the breadline. Federal programs which would address both the physical needs of youth and the wider ideological requirements of liberal democracy were now under consideration. Their goal would be to promote youth employment and forestall youth involvement in crime or extremist politics (Reiman, 1992).
The ideological instilling of democratic values and the management of the public image of federal civil authority were made important undercurrents of both the "war on the depression" (the Civilian Conservation Corps and National Youth Administration programs) and the FBI's war on crime. Similarly, the successful progressive educational institutions were now lobbying for a trial of their alternative curriculums in the context of higher educational institutions. Thus an "eight-year study" on collegiate entrance was held between 1933 and 1941. While these apparently transformative movements were underway, however, the pressing needs of national security resulting from the start of W.W.II were redirecting the emphasis of federal youth programs and of public school education toward defense readiness.
Public school funding after the crash
Historians typically date the start of the Great Depression to Black Tuesday, October 1929, when a long period of unrestrained corporate greed (dating back to the 1890s) precipitated a national stock market crash (Wecter, 1948; Shannon, 1960; Bird, 1966; Chandler, 1970; Duboff, 1989). Farmers and rural communities, however, had been feeling the fiscal crunch throughout the 1920s as the European demand for American agricultural produce steadily decreased (Silverman, 1982). Indeed, although there was a growth in the number of middle-class Americans between 1922-1928, working people generally missed out on the "roaring twenties" (Bernstein, 1960; Shover, 1965). For instance, between 1920 and 1929, the income of the bottom 93 percent of the population rose only 6 percent, while that of the top 7 percent increased almost 200 percent (Rose, 1994; see also Leuchtenburg, 1958). Foreign demand for manufactured goods in turn began to slump, and by the time of the big crash two of every three American families were already poor (Hill, 1990).
By contrast, the consequences of the crash were felt only gradually by the middle-class-biased public school system. The initial tendency, therefore, was to downplay the relevance of the wider economic crisis to the public school system (Rippa, 1962; Mirel, 1984). Like the incumbent Republican President Herbert Hoover, most school administrators had by definition bought into the ongoing business ethos of efficiency. They initially believed that the economic prosperity of the 1920s would soon reappear and that financial support for education would remain relatively stable throughout the crisis (see Callahan, 1961). Most city school systems, for instance, received slightly larger budgets in the 1931-32 school year than they had in 1930-31. In fact, however, since the depression had now crushed real estate values, it would soon crush public education.
By the 1932-33 school year, with several large cities on the verge of bankruptcy, the possibility of hard times ahead was becoming clearer. The National Survey of School Finance, authorized by Congress in 1931, reported early in 1933 that half of the states obtained 90% or more (and five sixths of them 80%) of their fiscal support from locally collected general property taxes (Knight, 1951; Tyack, et al., 1984).
Within the immediate context of administrative control of the schools, detailed cost accounting was demanded of each district and each so-called frill program. Night schools, summer schools, kindergartens, playground maintenance, and nonacademic subjects (such as music, art, physical education, industrial education, and special classes for the physically or mentally handicapped) were cut entirely or severely scaled back. Teachers' salaries, which comprised 75% of school budgets, were also cut, and teacher layoffs became commonplace after 1933. Some teachers were paid in "tax warrants" (school board IOUs), which many stores refused to accept and others accepted only at a discount (Urban & Wagoner, 1996; Button & Provenzo, 1983).
By now, the era of blind compliance with local school supervisors had passed. Membership in teachers organizations (such as the American Federation of Teachers and the Progressive Educational Association) which had fallen victim to the anti-unionism of the 1920s began to climb steadily. In 1933, even the administratively controlled NEA was stirred to action. It formed the "Joint Commission on the Emergency in Education" which eventually suggested two major ways to solve the financial problems of local schools. One was the pursuit of direct federal aid for education, and the other was state tax reform.
While neither of these efforts was completely successful (see Wesley, 1957), the data gathered through successive NEA surveys allowed local school districts to compare their own fiscal circumstances with those of other similar districts for the first time. It became increasingly evident that more consistent state taxation and federal aid were needed to counterbalance the vast regional educational inequalities produced by the long-standing fiscal dependence upon local property taxes (see fig 38).
Figure 38 NEA School Survey. The above panel appeared in the American School Board Journal (1934). The Romanesque-style woman represents public education, far above the din of industry and everyday life. She is reading the report of the NEA experts on how to improve the schools. The cartoon also hints at the post-stock-market-crash elevation of education (and teachers) relative to businessmen. The pervasive feeling that local business interests had betrayed the public schools was expressed in education journals from 1932 onward (e.g., Cubberley, 1933) and was quite a change of tune for a profession that had consistently celebrated progressive aspects of free enterprise during the 1920s. The rapid reversal of opinion struck some observers as skin-deep (see Counts, 1932; Newlon, 1934, 1939; Bowers, 1969). Most educators were sincerely distressed by the suffering they saw and angered by cuts in school budgets, but very few were yet radicalized. By 1934, the public education establishment would also claim that it had been abandoned by the lower-class-biased New Deal educational initiatives, which ran largely parallel to the traditional public school system. The schools, however, did benefit significantly through the mechanism of federally mandated public works projects on school buildings and grounds (photo from Tyack et al., 1984).
The Old Deal defines the "New Deal"
Along with the new Democratic President, Franklin D. Roosevelt, came a new approach to tackling the economic crisis. Roosevelt had defeated Hoover in the 1932 presidential election, but it was another year before the first part of the National Survey data was available. In that time, the ostensibly self-correcting free market economy of Hooverism showed no signs of recovery. When Roosevelt was inaugurated on March 4, 1933, nearly 13,000,000 persons, or 25 percent of the nation's workforce, were already unemployed, and banks had been closed in thirty-eight states (Bernstein, 1970). Roosevelt soon put into place a successively more radical policy of direct federal economic intervention appropriately called the "New Deal" (see Bernstein, 1985).
Early New Deal Reforms
At first, the economic and political realities of the old deal set the limitations for the New Deal (Howard, 1943; Hawley, 1966). Through 1933 and much of 1934, the new administration had to contend with a sizable portion of American society that wanted to continue to employ the existing public institutions (such as public schools and existing governmental agencies) to remedy the ailing economy. Business interests (and those advocating further educational entrenchment) were still powerful, and criminal corruption of the judicial system combined with intimidation in the industrial workplace remained the established order of the day (Bernstein, 1960). The early programs under the National Recovery Administration (NRA) were, therefore, quintessentially conservative. They were aimed at rescuing or reforming the old economy rather than transforming its structural underpinnings (Schwartz, 1982; Susman, 1983).
Walking the political tightrope between the demands of rising unionism and those of ailing big business, the New Deal administration portrayed its early economic measures in politically centrist terms as: (1) making business safer for American taxpayers; and (2) stimulating economic recovery by promoting cooperation between government, business, and labor (Baritz, 1960). For example, only after shoring up public confidence in the banking system with the Federal Deposit Insurance Corporation, and only after similarly stabilizing ongoing agricultural overproduction through farmer subsidies (but also via forced removals of tenant farmers) under the Agricultural Adjustment Act (AAA), were the more novel (left-leaning) series of "civil works" projects brought into place.
Similarly, the Public Works Administration (PWA), the Works Progress Administration (WPA), and the youth-oriented programs -including the Civilian Conservation Corps (CCC) and National Youth Administration (NYA) programs- were all initially portrayed as "relief" oriented programs aimed primarily at putting the nation's unemployed to work on politically defensible civic improvement projects (see Williams, 1939; Brown, 1940). The agencies were functionally conservative in their aim to remedy the ailing economy without disturbing the predominant middle-class values of American industrial society (Schwartz, 1982). Likewise, the educational aspects of these "alphabet agencies" were consistently aimed at providing young lower-class White men (and a small percentage of Negro youths) with a basic level of vocational marketability that would be immediately useful in a post-depression economy.
Constant deference was shown to business interests throughout the early New Deal era. Successive repeals of minimum wage-rate policies, severe curtailment of "production-for-use" projects, and the abrupt cessation of the innovative Civil Works Administration (an attempted compromise between a relief and an "employment" program) were all enacted on the grounds that they competed unfairly with the private sector. Also, virtually all New Deal programs (even the federally sponsored migratory worker camps or "transient" camps) were de facto segregated, with consistently lower pay and poorer facilities being provided for Negroes (Salmond, 1965, 1967; Schwartz, 1982; Rose, 1994).
Even these relief programs, however, necessarily entailed sponsoring educational and training initiatives designed to allow unemployed and dispossessed workers to return to the regular workforce. The educational aspects of the WPA, FERA, and the youth programs were not common schooling aimed at all segments of the population. They were specifically designed for the poor and staffed largely by people already on relief (Tyack, et al., 1984). This was a new style of education premised on the notion that all kinds of people can teach and that learning can take place in various settings. Public schools did, however, receive federal aid in the form of maintenance and school construction work done by WPA members, so the New Deal programs were designed to supplement rather than supplant the standard public school system.
In the novel aspects of early New Deal planning policies, however, the seeds were sown for the later, more progressive reforms. These were to come especially after the renewed recession of 1938. Only then was it clear to most voters that continued bows to the ongoing business ethos would actually stand in the way of further economic recovery. Hence, the "later" New Deal reforms (between 1935 and 1941) were brought into play with the expressed intention of protecting citizens from the outmoded hazards of an uncontrolled free market system.
Further New Deal Reforms
The Social Security Act (1935), which provided old-age pensions, and the National Labor Relations Act (1935), introduced by Senator Robert Wagner (a New York Democrat), can be viewed as successive steps toward the left in the domestic affairs of the New Deal administration. The "Wagner Act" is particularly relevant here because it: (1) provided a mandate for the National Labor Relations Board to adjudicate management-labor disputes; (2) guaranteed labor's right to organize unions and engage in collective bargaining; and (3) outlawed specific unfair labor practices. One of the most controversial sections of the earlier N.R.A. (1933) had been 7a, which said that employees shall have the right to organize and bargain collectively and shall be free from the interference, restraint, or coercion of employers.
Roosevelt had initially hesitated to support labor unions. The Wagner Act provided a clear message to large employers that unions were part of the solution to the nation's economic woes and not part of the problem (Bernstein, 1970, 1985). The Act also allowed so-called group relations to assume an importance of the first order. This created a market for the labor relations technology of sociologists and industrial psychologists. Employers could no longer treat employees on an individual basis but had to work out formalized company policies. Personnel selection, training, and advancement, once considered the private prerogative of employers, were now becoming part of an institutionalized pattern of the workplace (Fairchild, 1937).
Image management and the FBI
Another important aspect of federal intervention, in 1935, was the formation of the Federal Bureau of Investigation. The new FBI was actually an amalgamation of three pre-existing agencies: the Prohibition Bureau (which had been transferred to the Justice Department in 1930), the Bureau of Identification (with its massive card file fingerprint repository under J. Edgar Hoover), and the Bureau of Investigation (also under the control of Hoover). The formation of the new FBI was politically orchestrated by Homer Cummings (the incumbent Attorney General) but was soon under the de facto control of its appointed director, Hoover.
Upon taking office, the Roosevelt Administration's original plan was to scuttle both the existing Prohibition Bureau (due to the proposed repeal of prohibition) and the Bureau of Investigation (as a cost-cutting measure). Within the Department of Justice, however, the new Attorney General (Homer Cummings) was astute enough to recognize that considerable political mileage could be gained from promoting the formation of an expanded, re-equipped, federal policing force as a solution to the ongoing lack of public faith in local and state law enforcement. In 1933, Cummings portrayed the Kansas City Massacre, in which four lawmen (including one bureau agent) and their handcuffed prisoner (Frank Nash) were shot down while stepping off a public train, as a direct challenge to the federal government's anti-crime crusade. He then skillfully utilized public uproar over a jailbreak by another Midwest gangster, John Dillinger, to speed a series of crime bills (including the power to carry guns and to offer rewards) through Congress during the spring and summer of 1934 (Powers, 1983; Breuer, 1995).
So, as the National Recovery Administration was sanctioning various "relief" programs such as the AAA and CCC for their "war" against the depression, the newly endowed federal Justice Department was initiating a countrywide war against notorious public enemies such as Dutch Schultz, Al Capone, and John Dillinger. It was intended that the Attorney General would remain the functional human link between the public's crusade against crime and the New Deal's political program of national unity. But this was not to be the case because, in forming the new FBI, the administration had necessarily relied heavily upon the prior decade of G-man image-building orchestrated by the director J. Edgar Hoover (in his former capacity as head of the Bureau of Investigation).
This interplay between political versus bureau control, and the related concerns over public image management, was also being played out in the substance, form, and goals of other New Deal programs (including the youth-oriented CCC and NYA). Another impetus of change in the mandate of such domestic programs, however, was the still wider arena of foreign policy. From 1936 onward, Roosevelt and his administration had been moving cautiously to break down long-standing American isolationism on the grounds of national (and international) security (Jonas, 1966; W. Cole, 1983). This national defense theme was especially cogent to the CCC military "preparedness" initiatives after 1939 and to the slightly earlier "ideological maintenance" initiatives put in place in so-called NYA resident camps (Reiman, 1992). We will take up each of these programs in due course.
College Entrance and Progressive Public Education
Given their perpetual endowments, the better established higher educational institutions did not have a clear incentive to transform their programs. Despite the hard economic times, and despite the successively more radical New Deal youth programs (as described in detail below), the higher educational system continued the conservative structural reforms begun in the late 1920s. American universities, in particular, continued their material growth throughout the whole period between the world wars and were an attractive refuge for out-of-work white-collar workers (Touraine, 1974). Many institutions did, however, attempt to adjust their entrance requirements in accordance with pre-depression, administratively motivated "progressive" trends. This was particularly true of high school and college level institutions, which were thereby able to continue rising attendance at pre-depression rates (Tyack, et al., 1984, pp. 144-150; USBC, 1975, Vol. I, pp. 368-374).
The SAT and GRE
One indication of how this conservatism expressed itself in tough times is the continued expansion in the use of objectified standard entrance exams for higher education (Cheydleur, 1937). From 1926 to 1935, the number of candidates taking the older essay-style written exams each year declined from over 22,000 to fewer than 14,000, but in the same period the number who took the objectively formatted SAT increased from 8,040 to 9,437. Moreover, 3,000 SAT examinees in 1935 did not take any of the Board's written exams because some colleges were now requiring only the SAT (Valentine, 1987; Owen, 1985).
The older program of written exams was on a downhill slide partly because it was now recognized that examination costs could be "cut in half by the use of objective tests" (Brigham, 1934 In Valentine, 1987, p. 40). After the introduction of the Graduate Record Examination in 1936, it was used experimentally by the College Board to augment essay-based admission tests at some graduate schools prior to W.W.II. There was also considerable talk about merging the various competing business factions in the objective testing industry (Lagemann, 1983). These included: the College Board; the Carnegie Foundation for the Advancement of Teaching; the American Council on Education; and Woods' Cooperative Test Service.
Ben D. Wood was hired by the College Entrance Board to run trial applications of the Graduate Record Examination (GRE), which originated in 1936 under Carnegie Foundation funding and under the direction of Wood's Cooperative Test Service. The GRE is essentially an upward extension of the SAT (i.e., a higher level of difficulty for each question). Despite later claims to the contrary, the successive introduction of the SAT and GRE as official college entrance requirements functioned to deter those without appropriately conservative (elitist) high school and college experience. They were put in place to increase the reliability of selection of students and hence to maintain the societal stratification of those institutions (a.k.a. the "status quo"). Standardized testing requirements were certainly not intended to be (as later claimed) creators of opportunity or equality of access to higher education (Owen, 1985).
The Eight-Year Study (1933-1941)
In distinction to these conservative trends, one indication of the partial continuance of child-centered progressivism was the Eight-Year Study. During the early 1930s, when lower-ranked colleges were hurting for students, they became more receptive to ongoing complaints from progressive educators about how college entrance requirements were dominating the high school curriculum. Conceived initially as a response to the problems of college admission, the Eight-Year Study also became a landmark in the movement to update the secondary school curriculum toward so-called Life-Adjustment Education; that is, to broaden the typical public school experience to address the needs of the 60% of students who were neither planning to enter university nor to go directly to work after high school (Spring, 1988).
In 1933, over 200 colleges and universities agreed to waive the standard entrance requirements for students from 30 selected progressive high schools on an experimental basis. Of the large Eastern colleges only the ultra-exclusive institutions of Harvard, Radcliffe, and Yale refused to participate (Aikin, 1942; Leigh, 1933; Zilversmit, 1993). Between 1933 and 1941, the selected schools were allowed to make whatever curricular changes they deemed in the best interest of their students. These schools, however, were hardly a representative cross-section of the American secondary education system. They had been selected by a Progressive Education Association committee in 1932 because they were already well known for their quality as "progressive" schools. Thirteen were private schools, six were laboratory schools connected with universities, and eleven were public high schools with innovative systems (Leigh, 1933).
The dice were thus loaded in favor of the study's "success." The study was also funded by enormous grants from the Carnegie Corporation ($70,000) and the General Education Board ($622,500) and assisted by an evaluation team led by University of Chicago professor Ralph W. Tyler (Tyack, et al., 1984). The effectiveness of the experimental curricula was judged in terms of how well their graduates did in college. Each graduate in the study was paired with a comparison college student from a high school not in the agreement. Not surprisingly, given the upper-middle-class backgrounds of the students, the selected graduates did well in terms of both academic and nonacademic ventures. There was no significant overall difference in the scholastic outcome of the experimental school group from the comparison group accepted from conventional high schools (Krug, 1960; PEA, 1943).
However, among the 30 selected schools, there were considerable variations in the degree of curricular departure from conventional programs. Additional analysis by Chamberlin et al. (1942) indicated that these differences actually favored the success of students from the most experimental schools:
"The graduates from the most experimental schools are characterized not only by consistently higher academic averages and more academic honors but also by a clear-cut superiority in the intellectual intangibles of curiosity and drive, willingness and ability to think logically and objectively, and an active and vital interest in the world about them....The students from the least experimental schools are, on the other hand, seldom distinguishable [sic] from their matches" (pp. 173-174; Quoted in Krug, 1960).
For Max McConn (1942), preface writer for the Chamberlin volume, the case against curricular essentialism (e.g., Bagley, 1938; Spaulding, 1938) seemed to be fairly well sewn up. The Eight-Year Study results were favorable to the more experimental curriculum:
"From now on if any individual dogmatically asserts that the traditional program is essential for college success, he can be politely assured that he is talking nonsense, ...and can be counseled to consult the evidence before conversing further on this topic" (pp. xxi-xxii; Quoted in Krug, 1960).
The results of the Eight-Year Study, and their contemporary interpretation, possessed the makings of an argument against the eventually universal application of college entrance exams (such as the SAT and GRE). To better understand why subsequent testing history unfolded the way it did, we must therefore look in more detail at the contrast between the New Deal youth programs (which emphasized guidance and universality) and the ongoing psychometric testing tradition (which emphasized selection and segregation).
Youth-oriented New Deal programs:
Physical and Ideological Maintenance
For the vast majority of youth, the frivolous lifestyle of the 1920s youth culture ended abruptly when the economic depression struck. Unemployed and out-of-school youths now became a central issue. Like other marginal groups (such as Black Americans), youths were the last hired and the first fired in the depression-era workplace (Spring, 1994).
The first federal emergency relief program for youths began in 1933 with the establishment of the Civilian Conservation Corps (CCC). This program involved housing unemployed youth workers in camp settings run according to quasi-military discipline (Hollingsworth & Holmes, 1969). Initially, only youths between the ages of 17 and 23 who were not presently attending school and whose families were on assistance were eligible, but the latter criterion was later relaxed. Enlistees received housing, clothing, food, and a wage of one dollar per day. The minimum term of service was six months and the maximum two years. In 1934, modest funding for college and university student education was put into place through the Federal Emergency Relief Administration (FERA), and in June of 1935 this was extended to high school students when Roosevelt established the National Youth Administration (NYA) by executive order.
The CCC and NYA were both conservative relief ventures in that they followed (with different emphasis) the two ongoing themes of physical and ideological guidance for the promotion of democratic values. Richard Reiman (1992) has pointed out that the CCC reflected the older 1890s idea that youths require physical conditioning and group organizational structure to pass successfully toward adulthood. This was the same idea reflected in the Boy Scout movement in Britain in 1908 (extended to the U.S. in 1909) and in the Wandervogel movement in Germany in 1901. The criterion for success of the CCC program was predominantly one of physical and mental toughening (i.e., the transformation of scrawny enrollees into strong, healthy, well-disciplined workers).
In contrast, the NYA was formed on assumptions that took into account that this was a novel historical age in which middle-class youths were routinely the intellectual equal of their parents at an early age. Thus the pure form of the physical doctrine (by which youths are placed in a natural setting to allow the "instinctual" seeds of adulthood to germinate) was rejected in favor of a program that increasingly involved both: (1) physical and mental toughening; and (2) ideological maintenance of democratic values. This combined approach was more in line with the "muscular Christianity" of the Young Men's Christian Association (YMCA) and, to a lesser extent, with contemporary European youth movements (Macleod, 1983). Whereas the CCC stressed the value of conservation projects for the nation and of learning the discipline of work for working-class youth, the NYA embodied a concerted effort for the vocational adjustment and ideological maintenance of lower-to-middle-class youths.
Civilian Conservation Corps (public image, daily reality)
The public image of the CCC was carefully managed in the early days of the New Deal. "It was partly a legend built by and for a middle-class America that wanted to believe in itself again, to think that an era of exploitation was over, that now the society was conserving, not wasting, its resources and that worthy young men once again had a chance if they worked hard" (Tyack, et al., 1984; p. 117). But the actual enrollees were hardly Eagle Scouts. They tended to be irresponsible, unhealthy, unclean, and without what the middle class would call mental or moral stamina. About a third were from broken homes; most were from rural backgrounds; and they had completed an average of eight or nine grades. Few of them had held any (let alone steady) previous employment.
As initially planned, these lower-class CCC youths were to learn the values of liberal democracy by carrying out useful civil works which would help support that democracy during a time of national crisis (Oxley, 1940). The construction projects included parks and campgrounds; dams and lakes; regenerated forests and farmland; wildlife refuges and wilderness areas; historic sites, lookout towers, trails, roads, and bridges. In addition, millions of acres of land were mapped and surveyed by CCC enrollees (see fig 39).
Figure 39 Education in the CCC work camps. Exposure to nature and to organized adult discipline was the order of the day for lower-class youths in CCC work camps. The authoritarian military officers and technical advisors from the U.S. Forest Service in charge of the camps provided a regime of low-level (and fundamentally conservative) educational activities characterized by basic drill in literacy skills, a strong emphasis on discipline and the values of hard work, and lessons in "common courtesy." The first CCC camp was opened near Luray, Virginia, on April 17, 1933, followed by many other camps in various parts of the country. Through July 1937, this youth organization had constructed more than three million small and forty thousand large dams to prevent soil erosion; fought thousands of forest fires; and had planted millions of trees. Thus was carried out the "molding" of the character and bodies of 3,000,000 lower-class Americans. Most of the enlistees were youths, but 225,000 of them were W.W.I veterans who, organized into their own companies, were given a chance to rebuild their lives in the camps. They played a major role in controlling the "Dust Bowl" of the nation's central region. In a massive effort to help stop soil erosion, the 3C's boys planted windbreaks, or shelterbelts, strategically placed across the Great Plains from Canada to Mexico (see Hill, 1990; photo from Tyack, et al., 1984).
To speed up building and administration of the resident camps, the Army was put in charge with the Forest Service and the National Park Service helping to plan and implement the construction projects. The American Federation of Labor initially opposed the CCC in the spring of 1933 because it might conceivably provide a readily available force of militarized strike-breakers. In a bid to address these fears, Roosevelt appointed an official of the Machinists' Union, Robert Fechner, as the CCC Director (Gower, 1967).
Not only was the CCC to be an instrument for the conservation of natural resources, it was also to perform an educational function (see Hill, 1935). There were three major classifications of courses taught in CCC camps: (1) remedial classes (for those who were illiterate or had only rudimentary schooling); (2) vocational courses; and (3) liberal (non-vocational) instruction. The various groups in the camps differed as to which of these should be given relative merit over the others. It was routine for each camp educational adviser to recruit course directors from available Forest Service and Army officers, experienced corpsmen, and local professionally trained public school teachers. These were courses specifically designed for the poor and staffed largely by people already on relief. They were neither based on the canons of professionalism nor designed by certified experts.
The Army took its traditional mental and physical discipline approach, aimed at teaching the boys how to do an honest day's work (Sloper, 1940). Reveille at 6:00 A.M. was followed by physical training, barracks inspection, and hard work broken only by meals and recreation or training in the evening. One in five enrollees deserted or was dismissed from camp; the actual average term of service was only about nine or ten months. Accordingly, some idealistic federal education officials and many of the national public school leaders argued that the CCC camps were too authoritarian and uninterested in educational activities that would promote subsequent participation in democratic society. In its evaluation of the CCC, the American Youth Commission commented that the Army's mode of operation produced an authoritarian atmosphere in which real exposure to democratic principles was impossible (see also Marsh, 1934).
It is important to note, however, that even the front-line professional camp educators hired by the CCC realized that education and training in the camps should not mirror those of "progressive" public high schools. The enlistees were mostly washouts from the public school system and wanted no part of the standard forms of schooling. By combining work and vocationally relevant study, the CCC hoped to (and did) reach this clientele (Sloper, 1940; Williams, 1940). Knight (1951), for instance, reports that in all the CCC aided more than 80,000 young Americans who were totally illiterate when they entered the camps.
By 1937, the relief provision for new recruits had been removed. Although youths with parents on relief were given preference, young men from more financially secure families were now allowed to enroll. The initial CCC function as a welfare agency had become outmoded. It was now ready to concentrate more fully on its other official goals: (1) to function as a vocational training vehicle and entrée into the slowly recovering economy; (2) to make enrolled men into self-supporting and useful members of society; and, after 1939, (3) to prepare these youths for modern mechanized warfare (Oxley, 1940).
The NYA (Its political and ideological mandate)
Part of this debate over education in CCC camps had to do with the ongoing turf war between the Roosevelt administration and the thoroughly conservative NEA. New Dealers like Aubrey Williams (Director of the NYA) were convinced that few public school teachers would have the courage to teach the truth about the injustices of American society, that educators regarded many youths as uneducable, and that the public school system did not equip either lower-class youths or adults to cope with the massive dislocations of modern society (Williams, 1940; Judd, 1942; Salmond, 1983; Tyack, et al., 1984). In 1934, NEA officials began openly complaining in the pages of the NEA Journal and in Phi Delta Kappan that the New Deal was neglecting the public schools.
Since the Roosevelt administration had rejected blanket "general" appropriation of federal aid to the states in favor of the creation of relief agencies that would specifically assist the poor, it was accused of being class-conscious and (by implication) un-American (e.g., Givens, 1935). The New Dealers, however, viewed their own overall approach as reforming standard educational practice and rendering it class-neutral. In the interest of political expediency, a conspicuous role for certified teachers was nevertheless drafted into the National Youth Administration program (aimed specifically at lower middle-class youth).
The NYA was ostensibly designed to: (1) provide part-time employment for needy secondary school, college, and university students aged 16 to 24; and (2) provide work experience for high school graduates with families on relief. The NYA's employment of young people on school-oriented work projects and university campuses (starting in January of 1936) relieved much of the political pressure that gave rise to the agency. The way had been cleared for such specific NYA job placements when the U.S. Commissioner of Education, George Zook, pronounced a successful conclusion to the experimental one-year FERA-sponsored work-study program at 51 colleges and universities (during the 1934-35 term). Federal work relief would now help students continue their education and would offer fiscally embattled college presidents a short-term labor force at no expense to themselves (see fig 40).
Figure 40 NYA student work placements. Guidelines for these student jobs allowed a maximum of twenty dollars per month during the academic year and stipulated that such jobs must be new (i.e., not previously budgeted for by the educational institution). They also mandated that colleges and universities must select "eligible" students who carried three-fourths of the usual academic program and that the institutions must actively supervise the student work assignments (photos from Lindley & Lindley, 1938).
The continuance of the NYA program from 1936 to 1943 was partly due to the fact that its increasingly ideological content opened up possibilities to test the middle-class reception to the administration's wider re-alignment toward national security issues. On this foreign policy front, advocates of democratic ideological maintenance, both within the New Deal administration and from the wider community, had made successive warnings that ideological radicalism in European youth was on the rise and that special programs for American youth should be brought into place.
Youthful discontent was certainly nationwide by 1935, and fears were expressed within the New Deal administration that some Pied Piper, some domestic führer, might soon produce undesirable changes in the American way of life (see Davis, 1936; Fass, 1977). The burning question was now whether the New Deal ought to portray the NYA as an explicit democratic indoctrination alternative to the European totalitarian youth movements. That is, should it present itself as an ideological institution explicitly designed to justify the faith of young people in the fundamental richness of democratic institutions?
Resident NYA Camps (Ideological maintenance)
Roosevelt initially rejected various ideological management overtures made by both New Deal insiders (such as Robert Wagner, A. Williams, and Charles Taussig) and by pragmatic outsiders such as Owen Young (Chairman of the Board of General Electric). In the face of these rejections, the initial NYA Deputy Director (Richard Brown) was bound to administer the NYA purely as a relief program for its first year (Loucheim, 1983). Even though the NYA would soon become an increasingly ideological endeavor, it would not do so explicitly. The program continued to be portrayed as a call to national service for the lower middle-class.
But by June of 1936 the organizational tide had turned in favor of ideological maintenance initiatives, in the face of the disturbing events accompanying the rise of German fascism, including the growing number of Jewish (or so-called non-Aryan) economic émigrés seeking placement in the United States. At this time, Aubrey Williams, as Executive Head of the NYA, was permitted to explore previous FERA-sponsored "resident" summer camps for women as a possible prototype for future ideological NYA initiatives aimed at: (i) blue-collar working women; (ii) rural youth; and (iii) European refugees. Thus, between 1937 and 1938, as FDR was attempting to move the political center leftward, the NYA's resident training centers began to dot the countryside (Dallek, 1979).
The resident camps for women, used as a prototype, were partially a continuation of the earlier program of resident schools launched in 1934 by FERA and transferred to the NYA in 1935. However, there was a considerable shift of emphasis in the educational aspects of the camps, away from the earlier FERA emphasis on "understanding" toward the NYA strategy of education for vocational and societal integration. As Reiman (1992) points out, the FERA camp women required job skills, not a discussion of Sherwood Anderson's Puzzled America, a book used occasionally at the resident schools (p. 149). Ironically, the actual FERA women seemed particularly susceptible to the anomie which the NYA officials now hoped to avoid. On the positive side, however, the FERA women's program achieved in some camps what the CCC dared to permit in only the rarest cases: racial integration (see Sitkoff, 1978; Rose, 1994).
The second target group for resident camps, rural youth, was to be relocated to rural camps where group organization, fresh air, industrial education, and citizen training would be provided (Burns, 1989). By 1937 the NYA began setting up ambitious industrial projects, and this development promised to be of particular usefulness to rural youth. Regional differences in the U.S. still meant that many rural youths (living in isolated communities) had little exposure to modern industrial advances. These youths also had little opportunity to learn what the structural underpinnings of American society might mean for their home communities and for their personal vocational futures.
The emphasis of ideological maintenance for resident rural youth was on guiding youth into the realistic vocational pathways of industrial, not agrarian, America (Reiman, 1992). This would be accomplished through a combination of work placements and formal classes providing workshop experience in steam heating, plumbing, cabinet making, auto mechanics, sheet metal, welding, and commercial foods production. For example, NYA youths attending day-time work placements in automobile shops would attend classes on the theory of the internal combustion engine in the evening. Business English was offered, and shop mathematics became standard fare at the newer centers during 1937 and 1938. This substitute socialization program (involving work, education, and citizenship training) was aimed at integrating rural youth back into the wider society.
But by 1937, it was already too late for such an ambitious economically based resident plan to come to fruition. The burgeoning hopes for a benevolent ideological maintenance emphasis in the resident programs were cut short by the actualities of overwhelming national defense needs. NYA planners, who initially considered the example of Hitler in 1934, now recognized that the ideological aspects of youth programs would not be aimed at forestalling the rise of a domestic dictator. They would, instead, be aimed at helping the country prepare American youth to face the present threat of Nazi expansion in Europe.
Part of the testing ground for this new national readiness mandate was then carried out through the inclusion of a small number of European refugees in the existing NYA resident camps. Roosevelt had to reckon with a tide of "nativistic nationalism," an old, seldom dormant, and easily awakened American tradition (see chapters 2-4). In addition, the argument that refugees take jobs from "native-born" Americans was especially pointed during the "Roosevelt recession" of 1938 (Polenberg, 1966). Patriotic and veterans' organizations, for instance, were fiercely opposed to any efforts to admit more European refugees (Breitman & Kraut, 1987).
When the Nazis embarked (in November 1938) on an active physical intimidation campaign against European Jewry (under the rubric Kristallnacht, "night of broken glass"), domestic criticism of possible refugee assistance was only briefly muted. The administration's political balancing act between pro and con organizations necessitated the creation of the President's Advisory Committee on Political Refugees, which resulted in the relaxing of visa requirements (by reducing the period for which private groups would be required to financially assist incoming refugees).
During this window of opportunity, the New Deal administration also moved to further break down isolationist resistance by transforming the public image of refugees from a faceless amorphous mass of aliens to a familiar group of individuals soon to become Americans. The main vehicle for accomplishing this objective was the placement of previously sponsored refugees into the NYA resident centers (Baumel, 1990). As so often in politics, the needs of those to be rescued would await the cultivation of American public opinion. In this particular case, too, the results of the ideological mandate were doubly ironic. Most of the refugee youths who ended up in the rural camps were from big-city backgrounds and had far more education than the NYA youth with whom they lived and worked (Reiman, 1992).
Ending of the Youth Programs (War preparedness begins)
From 1939 onwards, the all-out conversion of the resident centers into training centers for national defense cost the NYA its initial emphasis on job training for the sake of the enrollees themselves. This was the year in which war between Germany and England was declared and the Western Front of German military expansion was successively widened. In April 1939, in a major government reorganization effort, FDR asked that the NYA be placed in the Federal Security Agency (FSA), where it would receive appropriations directly from Congress (not the WPA). Between 1939 and 1943, the NYA camps participated first in the defense program and then in the implementation of war production efforts. Average enrollment in the 595 operating centers in 1940 was 27,685, or about 10% of NYA youths nationwide.
In 1941, the NEA-sponsored Educational Policies Commission (EPC) issued a report on the CCC and NYA. These educators recommended that the two agencies be discontinued and their functions transferred back to the public school system. In their view, the "youth problem" (i.e., the gap between the average school attendance and the requirements of the workplace) could be solved if the federal government backed off and let the schools extend their curricula to help all young people attain vocational competence. If sufficient support were now given to the public school system, there would be no out-of-work youth (EPC, 1941).
Judd (1942) countered the EPC report by pointing out that the real youth problem lay not in the lag time between youth learning and adult working but in the wider problematic disconnection between education, industry, government, and labor. In his view, instead of continuing the turf wars between the federal government and national educational associations, what was needed was a penetrating analysis of the American industrial system and of the relation of youth to this system (see also Judd, 1940). But this is precisely what (as a matter of disciplinary survival) the educational, ability testing, and industrial psychology communities had been consistently avoiding for two decades (see Wesley, 1957; Baritz, 1960; Gilb, 1966). The debate, as it turned out, was purely academic because real-world events intervened to settle things when Japan bombed Pearl Harbor (home port for much of the American Pacific Fleet). The official American entry into W.W.II called a halt to both the CCC and the NYA.
The New Deal had made a beginning in putting unemployed youth to work, but it reached only a fraction of those who were in need. The sudden demand for labor and troops in W.W.II seemed to end the "youth problem" by eliminating the considerable gap between school completion and work. It also deferred the deeper, more critical questions that had been raised by educational radicals regarding the changing inter-relationship between youth and modern industrial culture and its effects on the development of human intellect (see Gilb, 1966). By July 1942, 24,074 of the residents (28.2 percent) had begun training for war work, and shortly thereafter all NYA youths were streamed into the war effort as all non-war youth training was closed down. In the spring of 1942 the Senate Committee on Education and Labor moved to terminate both the CCC and NYA.
The CCC probably did as much to prepare the United States for participation in World War II as any other government agency. Ninety percent of the 3 million CCC enrollees later served in wartime. They were already accustomed to barracks life and they were disciplined (having learned not only how to take orders, but also how to give them). Approximately 50,000 Reserve Officers, for instance, had previously gained valuable experience in the CCC, leading and administering men at the Company level or at District Headquarters. These CCC educational and training initiatives were expanded by September 1940 with such national defense goals in mind.
The emphasis in the CCC, however, was placed on developing noncombatant skills that would be essential to the functioning of modern military organizations. In 1941, 266,759 enrollees completed units of such vocational instruction. That year, five hundred enrollees at a time were attending twenty-six radio schools, learning to become radio operators and technicians. Thousands of CCC-trained cooks and bakers later served in the mess halls of the Army, Navy, and Marine Corps, as did the CCC-trained auto mechanics at military motor pools in the U.S. and overseas (Hill, 1990).
World War II Testing and Training: Psychometric ideology regained
This section highlights the various ways in which vocational and intellectual ability testing were utilized during World War II. By the time America officially joined the ongoing international war effort against the Axis forces of right-wing Nazi Germany and Italy (1939-1945) and against Imperial Japan (1942-45), the economy was already on a wartime upswing due to the Lend-Lease Bill (1941).
Long-lasting alliances were being forged not only between industry and applied psychology but also between the public education system and the older ability testing tradition. A fair but critical account of these emerging alliances is now possible given the cumulative historical record. This section provides a few touchstones for that analysis by mentioning the political/military context of war readiness; the preparation for war by mainstream psychology; and the eventual use of psychometrics in military induction, assessment, and training during W.W.II.
Political and Military Context of War Readiness
The initial tactical and technological trial grounds for World War II were Spain and China. Between 1936 and 1938, the Spanish Civil War (in which an elected leftist coalition government was overthrown by the right-wing dictator Franco) allowed Nazi forces to test storm trooper tactics and the aerial bombardment of civilian populations (Fyrth, 1986; Aldgate, 1979). Similarly, what was later considered the first battle of W.W.II was fought between expansionist Japanese forces and defending Chinese Nationalist troops near Peking in 1937. The fanatical nature of the Japanese incursion was revealed in Nanjing, where 300,000 Chinese civilians were callously slaughtered by Japanese foot soldiers over a six-week period (Sun, 1993). The expansionist campaign was supported by a national consensus in Japan that viewed the material resources of Southeast Asia as rightfully belonging to a consolidating empire from that region (rather than to foreign French, British, or American trading companies).
In Europe, Nazi German forces had annexed Austria in March of 1938, ostensibly at the "invitation" of the Austrian government. German ground forces then marched into Poland on September 1, 1939, on the pretext of staving off trumped-up Polish military incursions (Shachtman, 1982; Prazmowska, 1987). England and France declared war against Germany two days later. Once the European "phony war" was behind them, the Nazi war machine launched an all-out blitzkrieg ("lightning war") toward the West, capturing France and throwing the Allied armies back to the English Channel at Dunkirk (Perrett, 1983; Calder, 1991). An impromptu cross-channel rescue of Allied personnel from the Dunkirk beaches by a motley assortment of British water vessels ensued.
In America, the earlier signing of both the so-called Hitler-Stalin Non-Aggression Pact (on August 23, 1939) and the later "Tripartite Pact" (in September 1940) between Germany, Italy, and Japan were successive blows to those who advocated further isolationist appeasement of European Fascism. The unexpected collapse of Allied forces in France brought with it the possibility that the British Navy might soon fall into the hands of the Nazis if extensive American intervention in the Atlantic region was not forthcoming. The predominant American military strategy to this point had assumed that British and French land forces would be sufficient to contain the expansion of Nazism in Western Europe (allowing the Americans to take a manageable offensive posture against expansionist Japan in the Pacific). The envisaged limited American aid in the Atlantic region was no longer tenable.
The federal Lend-Lease Bill (of March 1941) allowed the President to sell, transfer title to, exchange, lease, lend, or otherwise dispose of any defense article to any nation whose defense he deemed vital to U.S. security (Dobson, 1986). Extensive convoys of ships would now pass across the Atlantic to aid in the Allied defense of Britain. As American factories retooled for war, a total trade embargo against Japan was brought into place. This was done as a means of leverage for ongoing diplomatic negotiations regarding the imminent Japanese takeover of French Indo-China (later called Vietnam) and their military campaign in China.
The surprise Japanese attack on Pearl Harbor (on the morning of Sunday, December 7, 1941) was intended to annihilate the preponderance of the U.S. Pacific Fleet in one swift blow. Although the Japanese attack failed to destroy the primary targets (the aircraft carriers Enterprise and Lexington), it did succeed in prolonging the defensive posture of the American Pacific Fleet (Gailey, 1995). It was simply a matter of time, however, before American industry could produce enough additional naval power to assume an aggressive posture in both the Atlantic and Pacific theaters of war.
Pearl Harbor was a monument to the strengths and limitations of American preparedness in terms of military intelligence and warfare technology. On the plus side, deteriorating political negotiations, combined with overseas naval movement reports (by Naval Intelligence) and with domestic telephone and radio surveillance of the Japanese embassy in Washington (by the FBI), led the U.S. to expect an imminent attack on American forces in Indo-China with possible concurrent espionage activities in other locations. Accordingly, an important precaution was taken at Pearl Harbor: the resident aircraft carriers were ordered to leave port along with half their Army aircraft (Parkinson, 1973). However, despite frequent Allied air patrols in the region, and despite the use of new experimental radar (by Army Intelligence), the Japanese Strike Force (including all six carriers) had sailed unopposed to within striking distance of Pearl Harbor.
Manning the American war machine (Vocational retraining)
The noticeable changes in industry during W.W.II paralleled those of W.W.I: manpower shortages led companies to emphasize the assessment of their available workers for training (Metz, 1942; Connery, 1951; Nash, 1989; Gillespie, 1991). Virtually every factory installed apprenticeship and upgrading programs, and many of them utilized vocational tests (provided by the War Manpower Commission) to guide those training programs (Flynn, 1979). The interesting aspect of this testing was that it took place after the worker was hired. Thus, the vocational assessment tools were intended to function as one indicator of the most likely route of efficient training (or placement) within a company rather than as a means of employee selection per se. Such post hoc testing and counseling programs became the order of the day for war industry work (Tiffin, 1942; Super, 1942; Blum, 1976).
Better work opportunities for Black Americans and women in general were also produced by wartime munitions contracts. Nearly a million housewives took jobs producing airplanes, tanks, large guns, cargo ships, or small arms ammunition for the war effort (Milward, 1977). Price controls and rationing ensured the cost of living did not increase after mid-1943. Better jobs and more overtime increased wages for women by roughly 70 percent (Litoff & Smith, 1996).
Just as in W.W.I, so-called "Negro" workers also poured into defense industry jobs. While their generally low vocational status reflected the inevitable results of past inequity of access to basic education, many Negro war workers were no worse educated than their White counterparts in better vocational positions. Despite the ongoing discriminatory practices of the management and unions of the age (including differential placement and wage scales), Negro workers were at least able to obtain a sound foothold in both the industrial workplace and in segregated Army training programs. On the domestic front, Negroes became organized as a formidable political force through their "Double V" (victory abroad, victory at home) program, orchestrated by the Black press (Lichtenstein, 1982; Cook, 1964; Washburn, 1986).
Formerly wayward American youth, cast adrift by the depression era economic calamity, were also now provided with opportunities by entering into military service. In particular, volunteers for service could pick which branch of the service they preferred: Army Air Force, Navy, Marines, or Coast Guard. After Pearl Harbor there were floods of volunteers for the Army and Navy air corps. Women were also allowed to enlist for entry into noncombatant occupations.
Finally, the war draft itself called nearly 15 million more or less reluctant American warriors into the armed forces. For these enlistees, basic boot camp was followed by advanced training schools erected on college campuses (Davis, 1948; Stouffer, et al. 1949). For Naval Reservists, Army Militia, and college-trained R.O.T.C. officers, however, combat came much more quickly (Stuit, 1947). In due course, nearly 19 million Joes and Janes enlisted (11 million in the Army Infantry and Air Corps; 7 million in the Navy & Coast Guard; and nearly 700,000 in the Marine Corps). They saw action and provided support in the Pacific, Europe, Asia, and North Africa theaters of war.
Psychologists Prepare For Another War
World War II brought academic and applied psychologists into direct contact with each other and with the war effort. The operation of increasingly complex weapons of war, such as high-speed aircraft, required specialized skills. The need to identify and classify recruits who already possessed the required skills (or who might readily learn those skills) led so-called military psychologists to apply and refine the psychometric techniques of test reliability and predictive validity. By Victory against Japan Day (V-J Day), more than 1,100 classification officers and interviewers (enlisted personnel) had been trained and assigned to provide classification services at more than one hundred Naval facilities ranging from stateside recruit training centers to forward area personnel distribution points. This was, therefore, an era in which the administrative ideology of psychometrics was reasserted (and somewhat vindicated) in a modern military-industrial context.
Depression era testing developments
The psychometric subdiscipline had survived the great depression era by: (1) further embracing the sociology of industrial management (including opinion polling of workers in factories and vocational guidance in schools or government settings); and by (2) refining statistical standards for test validity and reliability (including standardized educational or college entrance exams and cross-sectional or longitudinal research designs). As indicated below, both of these subdisciplinary coping mechanisms would be extremely advantageous in the W.W.II military setting (in which the primary concern was to select, classify, and train recruits as efficiently as possible).
During the early 1930s, the twin horrors of economic depression and the Congress of Industrial Organizations (C.I.O.) rose up to haunt industrial management (Slocombe, 1937; Walsh, 1937; Pressey, et al., 1939; Zieger, 1995; Rosswurm, 1992). From a sociology of management perspective, these trends toward labor rights would have to be rendered vulnerable to attack by way of a fuller understanding of employees' thinking, attitudes, and social actions. Hence, industrial psychologists and personnel departments continued to play a role in the depression era restructuring of larger companies (Viteles, 1932; Keller, & Viteles, 1937). At the beginning of the depression, 31% of 302 companies surveyed had "industrial-relations departments" or at least directors, and more than half of companies employing over 5000 workers had such departments (Baritz, 1960, p. 119; Stevens, 1936).
The ongoing growth of union membership meant that seniority, not necessarily skill, would take precedence in rehiring workers who had been laid off (Galenson, 1960). For management, this simply meant that greater care would have to be taken in the original selection of incoming workers (Miller & Form, 1951; Scott, 1954). While several of the larger industries were already being organized by unions (with sit-down strikes, walkouts, and intimidation being used) employers not yet approached by organized labor were active in searching for dispositionally peaceful employees.
Similarly, attitude research and training programs might become useful in forestalling the leftward movement of existing personnel of a large company (Hoppock, 1935, 1938; Achilles, 1932; Kornhauser, 1946).
The related, more subdisciplinary centered, emphasis on statistical standards was ostensibly designed to increase the reliability of test results by either: (1) decreasing the error of measurement (e.g., multiple-choice vs. older essay test forms); (2) factoring out the effects of prior learning (in so-called aptitude tests); or conversely (3) by specifically assessing the long-term effects of training on test performance (in so-called achievement tests). Various articles and book-length treatments came out during this period dealing with the application of these issues in both the psychological and educational subdisciplines (see Lincoln & Workman, 1935; Guilford, 1936; Hawke et al., 1936; Hunt, 1936; Wrightstone, 1936; Anastasi, 1937).
Of particular note, however, was the work of Oscar K. Buros who (in 1935) began an ongoing independent compendium of mental tests called the Mental Measurements Yearbook which, by 1940, had grown into its modern form. Commissioned reviews of recent tests were provided and each test was only reviewed again after substantial empirical changes had been made. Buros's original hope that the periodic M.M.Y. volumes would help foster ongoing improvements in the statistical reliability and validity of testing practices has been hampered over the years in various ways (Jensen, 1980; Mitchell, 1984; Snyderman, & Rothman, 1988; E. Hunt, 1995a).
Similarly, since all of the contemporary cross-sectional and longitudinal studies adhered to the invalid additive conception of human intellect (as nature plus nurture), psychologists were left with an administratively useful but ontologically vacuous scientific methodology (upon which both their increasingly reliable measurements and their theoretical conclusions depended). The very resurgence of the nature-nurture debate in the early 1940s indicates that the underlying methodological hypothesis of latent intelligence was alive and well in the immediate pre-war period. Such a psychology of intellect could not lead to a deep understanding or a resolution of issues which faced teachers and New Deal government officials on a daily basis (i.e., class and racial inequalities of housing, schooling, and employment). Nor did it have any supportable scientific answer for those who openly adhered to racist doctrines. The latter failing, in particular, is partially due to the fact that their own founding additive assumptions were part and parcel of ongoing eugenics and euthenics programs.
Having routinely bracketed (i.e., pushed aside) older debates regarding the production of qualitatively higher (i.e., literate human intellect) from lower forms of human and animal intellect, even well-intentioned social scientists were now left with little that resembled an effective argument against ongoing domestic state sterilization programs or against the growing efforts of the Nazis (and Axis powers) to rid Europe of "the Jewish race." Nor did American psychologists have a viable argument against domestic right-wing fringe groups such as the German-American Bund party and the K.K.K. (see Howitt, & Owusu-Bempah, 1994; Smith, 1985; Guthrie, 1996).
Two conservative trends in psychometrics (i.e., ontological agnosticism in ability testing and the managerial orientation of industrial psychology), however, did not go entirely unnoticed or unopposed. In 1938 the Society for the Psychological Study of Social Issues (SPSSI), an affiliate of the APA founded in 1936, issued a Statement on Racial Psychology explicitly denouncing the continued use of the fictitious "Aryan" racial category (see Morris, 1984) and the continued reliance on the spurious latent intelligence hypothesis:
"The current emphasis upon 'racial differences' in Germany and Italy, and the indication that such an emphasis may be on the increase in the United States and elsewhere, make it important to know what psychologists and other social scientist have to say in this connection....[The] emphasis on the existence of an 'Aryan race' has no scientific basis, since the word 'Aryan' refers to a family of language and not at all to race or to physical appearance. There is no evidence for the existence of an inborn Jewish or German or Italian mentality. Furthermore, there is no indication that the members of any group are rendered incapable by their biological heredity of completely acquiring the culture of the community in which they live. This is true not only of the Jews in Germany, but also of groups that actually are physically different from one another. The Nazi theory that people must be related by blood in order to participate in the same cultural or intellectual heritage has absolutely no support from scientific findings" (SPSSI, 1938, In Guthrie, 1976, p. 200).
Signatories to the statement included: Floyd Allport, Syracuse University; Gordon Allport, Harvard University; Franklin Fearing, University of California at Los Angeles; George W. Hartmann and Gardner Murphy, Columbia University; David Krech [I. Krechevsky], University of Colorado; T.C. Schneirla, New York University; and Edward C. Tolman, University of California. But such appeals to social ethics were, of course, only as effective as the practical alternatives which backed them up (Kuhl, 1994). These were sadly lacking in contemporary SPSSI arguments. For example, in 1940 the old debate regarding the relative importance of inheritance for human intellect was rekindled into a formal debate between representatives of Iowa under Stoddard and of the University of California under McNemar (Minton, 1984).
Another SPSSI critique, aimed this time against the conservative management bias of testing programs, was presented in the first SPSSI yearbook (edited by George Hartmann and Theodore Newcomb), Industrial Conflict: A psychological interpretation (1940). This was put out from an explicitly pro-labor standpoint (Baritz, 1960; Finison, 1979). Several of the contributors pointed out that the dominant conservative climate had seriously infected the social science discipline. This, and successive SPSSI sponsored volumes, would help broaden the perspective of experimentally oriented "social" psychologists during the war -by discussing the causes of industrial conflict or of war, and by actively wrestling with the issues of civilian morale during war and of securing an enduring peace (see G. Watson, 1942; Tolman, 1942; Murphy, 1945). It is also clear, however, that such volumes lacked an explicit systematic methodology by which to differentiate the practical/empirical implications of their own position from that of others. In addition, as indicated in chapter 6, there were both military and political forces building in America that would systematically purge left-leaning solutions from the expanding system of post-war higher education.
Many of the contemporary applied psychometricians and psychologists had already fully embraced the conservative "data oriented" trends of ontological agnosticism and empirical operationism. They remained entirely unswayed by (or simply ignorant of) unpopular collectivist arguments from the political left. The two rightist trends (of statistical methodolatry and management bias) must therefore be recognized as the dominant trends in pre- and post-war industrial psychology. These trends were reflected directly in other events including: the founding of the Psychometric Society (1935); Guilford's (1936) Psychometric Methods (which outlined his method of Factor Analysis); and, also in 1936, the first Invitational Conference on Testing Problems and the launch of Psychometrika.
Emergency W.W.II Committees
The 1939 APA meeting was in session when Hitler began his invasion of Poland. A joint emergency committee was then established between the APA and AAAP with Walter R. Miles as Chairman (Miles, 1940). A Conference on Morale was subsequently held in Washington on November 2-3, 1940 (Dallenbach, 1946). Similarly, the Emergency Committee's appointed Subcommittee on Survey and Planning (under the chairmanship of Yerkes) met from June 14-20, 1942 at the Vineland Training School.
The Subcommittee was made up of members evenly divided between experimental and applied interests including: Yerkes, E.G. Boring, A.I. Bryan, E. Doll, R. Elliott, E. Hilgard, C. Rogers, and C. Stone. The ongoing disciplinary cooperation between clinical, applied, experimental, and psychometric psychologists then extended outward to include collaboration between the Psychological Corporation, the Psychometric Corporation (founded 1935), and the College Entrance Examination Board in establishing the grounds for a psychological contribution to the ensuing war effort (see Boring et al., 1942; Hilgard, 1945ab; Wolfe, 1946). The overall result was that when Japan finally did attack Pearl Harbor, the American testing community was much more prepared than it had been in 1917 to assist in the selection, classification, and training of newly enlisted personnel. By this time, for instance, various versions of the Army General Classification Test (AGCT) had already come into use.
Military Induction (Procedure and Testing)
With the passage of the Selective Service Act (in the spring of 1940), a Personnel Testing Section was established under the War Plans and Training Officer of the Adjutant General's Office. The Army officials in charge of personnel selection and classification (including Yerkes and Bingham of W.W.I fame) tried to place the entire induction program onto a "merit" foundation (Thomson, 1943; Capshew, 1986). The new recruit, upon induction, was given various physical and mental examinations.
In this overall cooperation between medical practitioners, mental testers and applied psychologists, the term "selection" was typically used to describe measures designed to determine qualification for service. The term "classification" was used to indicate potential for success in billet assignments or "Specialized Training" programs (ameliorative or advanced). When a test battery was designed to perform all three functions, however, it was simply called a "qualifying" exam (Stuit, 1947; Davis, 1948; Stouffer, et al. 1949).
With regard to the physical examinations, a surprisingly large number of men had to be refused during the early days of the war. Of the first 2,000,000 examined for induction, 50 percent were rejected, 5 to 10 percent because of illiteracy, but the rest due to a variety of physical maladies and mental defects. Even 25 to 27 percent of the young men 18-19 years old failed to qualify. To illustrate the problem, one induction officer asked the medical examiners to have the boys squat about ten times before taking their blood pressure. To their surprise, they had to help many of the boys up after 5 or 6 squats, and these boys still passed muster! It was expected to take sixteen weeks of physical conditioning in Army camps before other, more specialized, training could begin to transform these sickly or flabby young men into guardians of democracy (Ungland, 1979).
Anyone inducted or volunteering for the Armed Services during W.W.II was given some form of "general" classification test (AGCT or Navy GCT) to place them into one of five military grades. The Navy, however, supplemented their general test with another test battery (the BTB), which will receive a distinct treatment later. Regardless of the service of interest, however, an attempt was made to place each inductee where they might best meet the present wartime need.
Initially, military personnel were classified on the basis of general tests alone. Depending on their particular mental grade (military grade), they were then allotted to: Special Training Units (i.e., illiterates and slow learners); Army Specialized Training assignments (i.e., engineers, doctors, dentists); or Officer Candidate Schools. For instance, after an Army recruit had received basic training of 120 days and passed an Army classification test with an AGCT score of 110 or higher, he was entitled to request mechanized infantry training (Thomson, 1943). However, it was soon found that up to one-third of trainees selected by general test batteries failed to complete their subsequent military courses or training. This was disappointing news for those who still advocated the use of general intelligence tests for crew selection (see Staff, 1945b, 1945c; DuBois, 1947).
Other personnel test devices (including clerical or mechanical aptitude, and trade tests) were then brought into use for these purposes with varying success. One moderately successful venture was the case of Army and Navy aviation training. In particular, since every washout from pilot training cost $40,000, more specialized pencil-and-paper trade tests and performance tests were devised and refined with respect to their convergent (between test) validity and their overall predictive validity with respect to completion of training (see Stuit, 1947; Cruse, 1951; Stagner & Karwoski, 1952; Frederiksen, 1984; Danziger & Ballantyne, 1997).
Army General Classification Test (AGCT)
Developed by Army personnel technicians, various versions of the Army General Classification Test (AGCT) were completed before the first American draftees arrived at the reception centers. The measure was then used to examine more than 12,000,000 men and women in the combined forces between 1941-1946 (Boring, 1945; Hovland et al., 1949). This pencil and paper test battery was technically similar to the Army "Alpha" (mental capacity) test of W.W.I in that it contained vocabulary, arithmetical, and block-counting tasks. The initial version (yielding one overall score) was subsequently differentiated into a still wider battery of tests yielding four partial scores -arithmetic computation; arithmetic reasoning; reading and vocabulary; spatial relations (DuBois, 1970).
The AGCT was explicitly intended as a test of "general learning ability" (Staff, 1945b, p. 760). During its use, four forms were devised: AGCT-1a was released October, 1940; 1b in April 1941; and 1c & 1d in October 1941. All items were newly constructed for each form except that the same block-counting items were used in both 1c and 1d. The time limit for all forms was 40 minutes, and the raw score was the number of rights minus one-third of the number of wrongs.
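The correction-for-guessing rule just described can be expressed as a short calculation (a minimal sketch in Python; the function name and example values are illustrative assumptions, not part of the historical scoring procedure):

```python
def agct_raw_score(rights: int, wrongs: int) -> float:
    """AGCT-1 raw score: number of rights minus one-third of the number of wrongs.

    Omitted (blank) items are neither rewarded nor penalized, which is why
    the formula takes rights and wrongs separately rather than a total.
    """
    return rights - wrongs / 3.0

# An examinee with 90 items right and 30 wrong: 90 - 30/3 = 80
print(agct_raw_score(90, 30))
```

The one-third penalty reflects the logic of a four-option multiple-choice format, where random guessing yields one right answer for every three wrong ones, so the expected gain from blind guessing is zero.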
Anastasi (1948) describes these tests as: "four equivalent interchangeable forms, each requiring about one hour, including preliminary instructions, a fore-exercise and the test proper..." (p. 405). The raw score and so-called Army Grade attained by each examinee was recorded on their Qualification Card in terms of a standard scale in which 100 represents the average score of men of military age (see fig 41).
Figure 41 Expected and Obtained distributions from the AGCT-1a standardization sample. The "expected" distribution (based on the standardization sample of 2,675 regular Army and CCC men) was only approximated by the "obtained" distribution (based on 589,701 cases from November 1940-October 1941) due to the systematic elimination of those with "low mentality" and those with "occupational and dependency deferments" (from Staff, 1945b). They note in passing that it was "not practicable" to include race as a variable in the weighting procedure. In July 1942, the lower limit of Grade IV was "arbitrarily" extended downward an "additional half-standard deviation" from a score of 70 to 60.
Psychometrically speaking, the specific standard-score cut-offs for each of the five grades on the AGCT-1 scales were worked out as follows: Grade I -130 and over; Grade II -110-129; Grade III -90-109; Grade IV -70-89; Grade V -below 70. From the more practical Army point of view, however, the "excess" of Grade V scores was disturbing, because it led to Unit Commanders protesting the allotment of too many low grade men. The Grade V demarcation, therefore, was "arbitrarily" narrowed by extending the lower score of Grade IV downward. While this discrepancy in cut-offs can be attributed to understandable trade-offs necessary during initial test use, the more questionable subsequent discrepancy between the actual administrative use and the disciplinary accounts of the function (or meaningfulness) of the AGCT scores should also be noted. Here we see the same posturing for post-war marketability taking place with the AGCT as took place with the Alpha tests (see, Harrell & Harrell, 1945, and chapter 6).
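The grade bands, including the July 1942 "arbitrary" extension of Grade IV's floor from 70 down to 60 (half of the scale's 20-point standard deviation), can be laid out as a simple banding rule (a hedged sketch in Python; the function name and flag are illustrative, not historical terminology):

```python
def agct_grade(standard_score: int, post_july_1942: bool = False) -> str:
    """Map an AGCT standard score (mean 100, SD 20) to an Army Grade.

    After July 1942 the lower bound of Grade IV was extended from 70 to 60,
    narrowing the Grade V band that Unit Commanders had protested.
    """
    grade_iv_floor = 60 if post_july_1942 else 70
    if standard_score >= 130:
        return "I"
    if standard_score >= 110:
        return "II"
    if standard_score >= 90:
        return "III"
    if standard_score >= grade_iv_floor:
        return "IV"
    return "V"

# A score of 65 was Grade V before July 1942, but Grade IV afterward.
print(agct_grade(65), agct_grade(65, post_july_1942=True))
```

Note that the change moved men between grades purely by administrative fiat: nothing about the underlying scores or their meaning changed, which is precisely the discrepancy between administrative use and disciplinary accounts noted above.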
The AGCT was used as an administrative device to pre-select men for a large number of advanced (specialist) training courses. For instance, a standard score of 110 or better was a prerequisite for officer training. A score of 115 or better, on the other hand, was required for advanced Army Specialized Training Programs (including Special Forces training). For four and a half years, the various forms of AGCT-1 were used roughly 8,23,879 times in reception centers throughout the country prior to January 1945. In April 1945, the test was "superseded" by the Army General Classification Test-3a, composed of four separately timed and separately scored subtests: Reading and Vocabulary, Arithmetic Computation, Arithmetic Reasoning, and Pattern Analysis. Ironically, however, the "over-all score" of even this officially partitioned hierarchy was openly portrayed by Staff (1945b) as a useful "index of general learning ability" (p. 768).
Ann Anastasi's chapter on Individual Differences in the later 1948 text incorrectly portrays the AGCT in a similar light but also attempts to recognize the administrative (and potentially ameliorative) function of testing:
"[T]he proved value of such tests as the AGCT in the Second World War attested to the fundamental soundness of intelligence tests when properly applied....The first consideration in interpreting ...test scores is to remember that no ...test measures native capacity independently of the individual's background of experience. Only insofar as the examinees have had common opportunities for acquiring the same general information and skills can the differences in test scores be diagnostic of future performance....[For example, in the] Army's [remedial] Training Units...men whose initial AGCT score place them in Army Grade V were able to raise their standing to Army Grade IV or even higher....Had the initial classification of these men been regarded as an index of their 'native intellectual capacity' without reference to their poor education and other experiences, the possibility of raising them to Grade IV level would have been overlooked" (Anastasi In Boring, et al., 1948, p. 409).
As seen in the following subsections on Navy and Air Force "general testing" efforts, however, the de facto reification of a statistical-descriptive abstraction (psychometric g) as an ontological reality (a.k.a. general intelligence) is endemic in the structure and conclusions drawn from W.W.II testing research.
Navy General testing (General Classification Test; Basic Test Battery).
In 1942, most Naval recruit training programs used a battery of tests called the General Classification Test for assigning recruits to Naval Training Schools (Stuit, 1947). Included in this initial "older" battery of tests were: (1) a "general" test consisting of sentence completion tasks, opposites, and analogies; and (2) more "specific" tests of mechanical aptitude, arithmetical computation, spelling, radio code, and English. Studies of this initial battery (made in 1942) indicated they were not effective in predicting success of recruits in actual training for Naval ratings such as Electrician's Mate, Fire Controlman, Gunner's Mate, Torpedoman, Quartermaster, or Signalman. A new test battery for Naval selection was then produced (Staff, 1945c).
Early in 1943, a Test Construction Group was established in the Bureau of Naval Personnel to improve the predictive validity of their test battery. By March 1943 a sample of recruits in each of six Naval Training Stations was selected in order to: establish new norms; adjust time limits; and carry out test refinements or additions with respect to reliability of alternate forms and assess the predictive validity of the various tests. Routine administration of the improved "Basic Test Battery" (BTB) began in June 1943 and a formal Fleet Edition of six tests (the General Classification Test, Arithmetical Reasoning Test, Mechanical Aptitude Test, Mechanical Knowledge Test, Electrical Knowledge Test, and Clerical Aptitude Test) was then administered to all incoming Naval recruits. This BTB was eventually administered to over 2,000,000 Naval recruits and the related Officer Qualification and Officer Classification Tests to over 100,000 candidates (see Jenkins, 1943; 1945).
Like the Army's AGCT, the primary use of the Navy's BTB was to administratively shunt available personnel into training schools. Psychometrically derived (rather than theoretically coherent) cutting scores on one or more of the tests (scores below which success in school is considered to be unlikely) were established for 46 types of existing training programs. High scores on the Arithmetic Reasoning Test, for instance, were found significant in predicting success in Basic Engineering and Electrical training, but were of less value than high scores on the Reading Test in selecting for Radar Operator and Fire Controlman training.
After initial naval training and assignment to duty, recruit test scores (on one or other of the general tests) were also used in further assignment of men to shipboard stations or billets in advanced training centers. For example, test scores of 55 or more on the GCT were considered desirable for Navy school training as a Gun Captain. All candidates for submarine service were similarly required to have a score of 50 or above on the GCT (see fig 42).
Figure 42 Naval General tests and Gunner's school. The upper panel shows young recruits taking the Army-Air Force Qualifying Tests. The Navy had its own General Classification Test in 1942 but switched to a newly standardized Basic Test Battery in the spring of 1943. Combined A-12 tests which decided whether or not a given recruit could enter officer training were then developed by the College Board. The nature and overall form of the test had been worked out in consultation with the Navy (photo from Guilford, 1952). The lower panel shows a situational (20 mm gun) Performance Test being administered to trainees in a Naval Gunner's Mate school (photo from Stuit, 1947).
Although the initial emphasis of research was practicably limited to the administrative function of the test battery (i.e., predictive validity for success in training), attempts were subsequently made to psychometrically "isolate" so-called general factors from the Basic Test Battery results. Plans were also under way for further "outcome studies" in which success in carrying out actual naval duties aboard ship was compared to incoming BTB test scores. This search for general factors was carried out through the use of Factor Analysis techniques. Anastasi (1948) described such research in the following way:
"If every tests shows some significant positive correlation with every other test -not at all an unusual finding- we have evidence of the existence of a general factor, one which is common to all the abilities tested, as 'intelligence' has been thought to be....The principle result of much work with factor analysis is the finding that the tested abilities form groups or clusters....The most clearly demonstrated group factors are the verbal, numerical and spatial. Test of any one of these skills or insights show low correlations with the others..." (1948, p. 412).
By far the more usual use of testing results, however, was to categorize, and keep track of the progress of, recruits through various training courses (whether for illiterate recruits, advanced military training, or Officer training). This "achievement" (success in training) oriented function of tests was emphasized in psychology texts after the war but only in an interactionist (innate intellect plus educational environment) manner. By missing the fundamentally transformative effects of military training itself on recruits, such texts failed to address the issue of active personal (and personnel) advancement observed in the W.W.II G.I. experience. One indication of this is their almost total lack of mention of: (1) the remedial programs for initially illiterate recruits; and (2) G.I. participation in the so-called Armed Forces University (both outlined below). Indeed these transformative aspects of the G.I. experience were largely irrelevant to those who tended to concentrate on purely psychometric issues.
Illiterate Inductees (Assessment and ameliorative training)
As stated above, all the enlisted men of the Army were required to take the AGCT and those in the Navy took the GCT or BTB test batteries. However, if an individual exhibited evidence of inability to read or write, or received a conspicuously low score on one of the general tests batteries, they would then take further group or individual classification tests.
Clinical interviews and standardized achievement tests
On 1 August 1942, psychological examiners (civilian psychologists) were assigned to Armed Forces Induction Stations for the purposes of distinguishing registrants who were capable of "absorbing military training at the normal rate" from those who lacked the "necessary mental ability" (Partington & Bryant, 1946). Later in 1942, about 100 personnel consultants (so-called Military Psychologists) were commissioned to the Army Specialist Corps under the Adjutant General's Department (headed by Walter Bingham of W.W.I fame). Initially, in order to distinguish inductees who were suitable for service (even though they could not read well enough to meet a standard of "4th year level" of public school reading) from those who were simply not suitable for service (due to mental inadequacy), a clinical screening interview was conducted.
The makeshift standards of evaluation used in the interview included: (1) previously attained educational level (i.e., completion of four years of school without more than one year "retardation"); and (2) occupational history (with reference to type of occupation, average wage, and length of time in one job). Those who seemed unpromising according to these biographical criteria were recommended as not suitable for service. Those who went on for further testing, however, received a varying and successively improved set of group and individual tests. In particular, one of these individual tests, the Wechsler Mental Ability Scale (Form B), differed little from the full scale Bellevue-Wechsler (1939), but included five additional items for use in the military situation (Altus, 1945a). It consisted of 16 sub-tests (seven verbal, and nine performance tests). Even this later assessment procedure, however, did not depend upon test results alone, but also considered facts concerning each man's emotional, educational, social, and vocational background.
On the basis of these combined criteria, registrants were recommended for: (a) return to regular unit; (b) transfer to a remedial Special Training Unit; or (c) discharge for mental "ineptness" for duty (Altus, 1945a; 1945b). Near the end of 1942 the more practically oriented Army Information Sheet was introduced as an "objective" criterion for literacy. Registrants who had not completed the seventh grade, and other special cases, were routinely given this test. It was composed of twelve items, including the writing of the enlistee's name, address, and age, and five questions based on paragraph reading.
But, since the Army Information Sheet was found to have too narrow a range of scores for so wide a range of school grade levels, it was replaced in June 1943 by the psychometrically oriented Mental Qualification Test. This seventeen-item written test was given to all registrants who were not high-school graduates to sort out those who would require ameliorative educational and military training. This latter group was sent to a remedial Special Training Unit to prepare them to face the demands of the regular Army training program. Those failing this test were given one of the above tests for illiterates (especially the Wechsler test).
The broadening of the testing program to include all non-high-school graduates, the improved literacy test, and the revision of critical cut-off scores increased the volume and importance of the Personnel Consultant's work during the later years of the war. Also, rather than simply making recommendations such consultants could now directly "reject" a registrant for failing to meet the "minimum intelligence standards" (Partington & Bryant, 1946, p. 111).
"To a greater degree than before, subjective [clinical] judgment gave way to scientifically constructed tools correlated with objective criteria of performance.... The Personnel Consultant could now accurately select registrants who were potentially capable of absorbing military training and...reject those who lacked the necessary abilities by using this personnel properly to administer, proctor, score, and interpret the new test battery [and the supplemental individual tests]" (Partington & Bryant, 1946, p. 112).
Army Special Training Units (Ameliorative intervention)
The purpose of remedial Special Training Units was to take "newly-inducted illiterates" and bring them to an approximate fourth year of public school level of literacy within a maximum of twelve weeks' time (Altus, 1945a). The classroom part of training typically occupied three hours daily, the remainder being devoted to military training. Classes were organized on different levels in many camps. The Private Peter Readers provided a readily available, carefully graded series for men on four functional levels of reading proficiency. Soldiers who made sufficient progress were transferred to a regular training unit as promptly as possible.
Success in training, however, was determined only in part by the soldier's classroom gains. Special texts, including a monthly magazine called Our War, a weekly Newsman Supplement containing simplified written digests of activities on the various fighting fronts, film strips, and detailed teacher guides were all used to prop up the "general information" aspect of each enrollee's mentality. Here, it might be argued, were echoes of the New Deal ideological maintenance. At the very least, an assumption was openly made that military competency called for an integration of schoolbook learning, soldierly attitudes, and historical knowledge (Ruch, 1943).
Various academic placement and academic progress assessment scales were used, but the most typical was the Soldier's Performance Scale, a graphic rating instrument designed to evaluate the attitude, social competence, and overall demeanor of an enrollee. To help predict outcomes, a tentative oral "adjustment" scale was developed and a 36-item written test of adjustment was then devised (Altus, 1945a; Altus & Bell, 1945). Although the more "maladjusted" inductees tended to be discharged, many of these men did graduate. The same tension between past assumptions about mental capacity and the more practical concerns over assessing the success of training of recruits was also evident in the domain of regular military training.
Testing and Regular Military Training (achievement and prediction)
After taking their initial induction tests and basic military training, recruits with adequate (Army/Navy) general test scores were then trained in combat skills, or in a military specialty or trade. Distinct achievement tests were worked out for any number of Army, Air, and Naval specializations. Their primary goal was to assess the progress of recruits through their training. Other predictive validity based research, however, was also carried out. The latter research, though simple in theory, proved rather difficult in practice. Guilford (1943), for instance, mentions the difficulties encountered in establishing predictive validity criteria for "the" aviation specialty (or rather group of specialties).
During the interwar period, the School of Aviation Medicine at Randolph Field, Texas conducted limited research into flight selection. Prior to the war, the chief qualifications for assignment to flight training were two years of college education (or equivalent) and passing a rigid 64-item medical examination (part of which was an interview designed to bring out personal qualities that might make a candidate unfit for such training). No doubt much of the faith in the earlier educational requirement, explicit or implicit, was due to the fact that men of that much educational advancement had already passed many intellectual hurdles, i.e., the mentally weak had already been screened out (Guilford, 1943). With the beginning of the war, however, the training of fliers jumped from the hundreds to the tens of thousands.
Psychologists were first invited to work only in connection with the selection of pilots. The tests for pilot selection were initially met with skepticism, but were given a fair trial. Minimum qualifying scores were set so as to disqualify a large number of individuals and thereby serve a selection function. As Guilford and other psychometricians have reported it, the selection standards (i.e., criteria for cut-off scores) were under constant adjustment according to the ongoing tension between the convergent validity (i.e., intertest reliability) and the predictive validity of test criteria.
What this meant in practical terms was that there was an ongoing administrative compromise between the psychometrics of the tests and the needs of the military:
"Many textbooks dismiss the problem of validity with the remark that a test is valid if it shows a high correlation with the criterion. Only rarely have psychologists pointed out that, while this is true by definition, the criterion itself presents problems which ultimately merge into metaphysics. Given a criterion which satisfies both the sponsor and the technologist, validation becomes largely a problem of statistical calculation at the level of the junior clerical worker. The only difficulty, to resort to gross understatement, is to locate such a criterion" (Jenkins, 1943, p. 525; emphasis added).
In W.W.II Naval Aviation (and in the other branches of the Armed Services) the basic problem of assessment was not that of the absolute ranking of recruits but one of estimating the probability that a given individual would, or would not, achieve a minimal level of usefulness. In practical terms, this implied only the separation of those who passed training achievement tests from those who failed. The final validation of early achievement batteries would have to be deferred until performance in combat could be assessed (Jenkins, 1943; Fiske, 1946a, 1946b; Flanagan, 1947). As the war dragged on, though, psychologists were given progressively more of an institutional mandate to streamline and assess the outcome of their initial predictive validity criteria.
Army Air Force Training (attempts at predictive validity)
Initially, applicants who passed a written Aviation Cadet Qualifying Exam entered a period of basic military training which was followed by five months of instruction in a college or university. Cadets were only then sent to a classification center where they were given psychological tests and assigned to a specific type of aviation training (depending on their aptitude scores and upon current requirements in the Army Air Forces). This procedure had the disadvantage of retaining, throughout a lengthy period, men who would eventually be disqualified on the basis of their test performance.
After late October 1943, it was decided to perform the psychological classification testing at Basic Training Centers before applicants were admitted to college courses (Staff, 1943; 1945a). This required an immediate expansion in the number of tests and therefore of the administrative body administering them. For example, slightly more than 4,000 aviation trainees were tested nationally per week in October 1943, but by December 1943 the rate had jumped to 15,000 per week. Accordingly, Medical & Psychological Examining Units were set up at seven Army Air Forces Basic Training Centers, manned by approximately four times as many psychological personnel. To coordinate research and testing activities, a Test Operation Unit was added to the command structure at Training Command Headquarters, and the so-called Psychological Test Film Unit (at Santa Ana Army Air Base) was set up to oversee psychological research.
Classification was achieved by way of a multiple cut-off procedure. Candidates first passed the Aviation Cadet Qualifying Examination (later designated as the AAF Qualifying Exam) and then were subjected to a number of performance tests. Some of these performance tests involved generalized motor coordination tasks (including Rotary Pursuit and Discrimination Reaction Time tasks). Others were more situational involving the kinds of specialized psychomotor skills used to operate, dismantle, or repair war machinery (including the Complex Coordinator, and Rudder Control tasks).
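The multiple cut-off logic can be sketched in a few lines. Unlike a compensatory (weighted-sum) model, in which a high score on one test can offset a low score on another, a candidate here must clear the minimum on every test. This is an illustrative reconstruction only; the test names and minimum scores below are hypothetical, not historical values.

```python
# Illustrative sketch (not historical code) of a multiple cut-off
# classification: one failing score rejects the candidate outright.
# Test names and cut-off values are hypothetical.

CUTOFFS = {
    "qualifying_exam": 60,      # written AAF Qualifying Examination
    "complex_coordinator": 40,  # psychomotor performance test
    "rotary_pursuit": 35,       # generalized motor coordination test
}

def passes_multiple_cutoff(scores: dict) -> bool:
    """A candidate qualifies only by meeting every minimum."""
    return all(scores.get(test, 0) >= minimum
               for test, minimum in CUTOFFS.items())

candidate = {"qualifying_exam": 72, "complex_coordinator": 44,
             "rotary_pursuit": 31}
print(passes_multiple_cutoff(candidate))  # one low score rejects: False
```

The design choice matters administratively: a cut-off rule is trivial for clerks to apply from a score roster, whereas a compensatory formula requires computing a weighted composite for every man.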
During the development of these performance tests, predictive validity coefficients for each test were calculated by correlating scores on the test with the actual graduation or non-graduation from training of a trial group of test subjects (pilots, navigators, and bombardiers). Each performance test was then assigned a mathematical weight according to the differential pattern of performance scores obtained from the three original sample groups (see Whittaker, 1965; Flanagan, 1948). That is, one set of psychometric weights was determined for pilots by combining tests in such a way as to give the maximum prediction of the success of the sample group in pilot training, a second set was worked out for navigators, and a third for bombardiers (see fig 43).
Figure 43 American Pilot Selection. The panel shows pilot trainees being tested on the Complex Coordinator, the most predictive of the eighteen tests taken by pilot trainees. A red light goes on in each of three sets (upper, lower, and central). By appropriate manipulations of the stick and rudder bar, the examinees must light the green bulbs opposite the red ones (photo from Stagner & Karwoski, 1952; Munn, 1962).
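The validation logic described above, correlating test scores with a dichotomous graduation criterion, is a point-biserial correlation, which reduces to an ordinary Pearson r computed against a 0/1 outcome. A minimal sketch follows; the trial-group data are invented for illustration, not taken from the wartime studies.

```python
# Sketch of predictive validation against a binary criterion:
# correlate each test's scores with graduation (1) vs. elimination (0).
# Tests with higher r would then receive larger weights in the
# specialty composite. Data below are made up.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation; with a 0/1 criterion this is the
    point-biserial validity coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical trial group: performance-test scores and outcomes.
scores    = [55, 62, 47, 70, 58, 66, 43, 74]
graduated = [ 1,  1,  0,  1,  0,  1,  0,  1]

validity = pearson_r(scores, graduated)
print(round(validity, 2))  # prints 0.77
```

A separate coefficient (and hence a separate weight) would be computed for each test and each specialty sample, giving the three distinct sets of weights described above.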
The military, of course, was not interested in the esoteric psychometric details of this statistical weighting process but only in the pragmatic issue of assigning men to the three specialties. The basic data employed were therefore presented in tabular form, indicating elimination rates by "stanine." The term stanine (short for "standard nine") was coined for the combined aptitude scores, which ranged from 1 to 9 and were differentially weighted for pilot, bombardier, and navigator assignments.
The absolute score requirements for a given Aviation Specialty were adjusted according to current military need. By 1945, for instance, in order to qualify for pilot training an applicant had to obtain a "pilot aptitude score" of 6. Similarly, the minimum qualifying bombardier stanine was also 6, while qualification for navigator training required a score of 7. These particular score criteria cut in half the number of applicants considered qualified to receive training in Air Crew Specialties (Guilford, 1943; 1947).
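For readers unfamiliar with the scale, the stanine can be sketched from its standard textbook definition: a nine-point standard scale with band boundaries at half-sigma steps around the mean, giving the familiar 4-7-12-17-20-17-12-7-4 percentage bands. The sketch below assumes that later textbook definition; the actual wartime scoring tables are not reproduced in the source.

```python
# Hedged sketch: convert a standard (z) score into a stanine, using
# the textbook half-sigma band boundaries. This is the conventional
# definition of the scale, not a reconstruction of AAF score tables.

def stanine(z: float) -> int:
    """Map a z-score onto the nine-point stanine scale."""
    # band boundaries at -1.75, -1.25, ..., +1.75 standard deviations
    boundaries = [-1.75, -1.25, -0.75, -0.25, 0.25, 0.75, 1.25, 1.75]
    s = 1
    for b in boundaries:
        if z > b:
            s += 1
    return s

print(stanine(0.0))  # an exactly average score falls in stanine 5
print(stanine(1.5))  # a strong score: stanine 8
```

Under this definition, the 1945 minimum pilot stanine of 6 would correspond to scoring at least a quarter of a standard deviation above the sample mean.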
Data analysis (IBM)
The Statistical Unit of the Psychological Section, Headquarters Army Air Forces Training Command, made use of new technologies for data analysis. The processing of records, the maintenance of files, and the analysis of data were, for the first time, performed primarily on IBM machines. The raw score data were entered onto punch cards, which were filed in various ways in order to expedite collation and data searches. For instance, a given officer's test results were recorded and cross-referenced by Army serial number, by officer serial number, by testing number, by class, and by name in order to facilitate efficient data analysis and searches. The Statistical Unit disseminated these in the form of rosters and punch cards to be filed in units and psychological detachments outside the Training Command.
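The multi-key filing scheme has a direct modern analogue: building one index per key over the same set of records, much as duplicate card decks were sorted and filed in different orders. A minimal sketch, with invented field names and values:

```python
# Sketch of multi-key cross-referencing: each record is findable
# under any of several keys, so a search on any one key is fast.
# All field names and values below are hypothetical.

records = [
    {"army_serial": "12345678", "testing_no": "T-001",
     "name": "DOE, JOHN", "cls": "44-A", "stanine": 7},
    {"army_serial": "87654321", "testing_no": "T-002",
     "name": "ROE, RICHARD", "cls": "44-A", "stanine": 5},
]

# One index per key, like duplicate card decks in different sort orders.
indexes = {}
for key in ("army_serial", "testing_no", "name", "cls"):
    idx = {}
    for rec in records:
        idx.setdefault(rec[key], []).append(rec)
    indexes[key] = idx

print(indexes["testing_no"]["T-001"][0]["name"])  # prints DOE, JOHN
print(len(indexes["cls"]["44-A"]))                # prints 2
```

The trade-off is the same one the Statistical Unit faced: duplicated storage (one card deck, or one index, per key) bought fast retrieval on any key.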
While graduation-elimination from training was commonly used as the criterion during the middle of the conflict, there was, in the latter years of the war, a distinct shift in research away from convergent validity (intertest correlations) or minimum training achievement toward an ideal of administratively efficient predictive validity. To this end, in early 1944, three Psychological Research Projects located at Central Instructors Schools were established (under the direction of N. E. Miller). This was an attempted shift in emphasis away from the selection of good trainees (i.e., for success in training) toward the selection of those who would perform well in combat (see Staff, 1945a; Fiske, 1946). The various potentially predictive measures investigated for the different specialties included the following: grades given in initial ground courses; performance on ground flight trainers; average "circular error" in bombing; scores in fixed gunnery training; rankings made by superior officers during operational (post-graduation/non-combat) training; and ratings made during combat (Staff, 1945a).
Enlisted Personnel and Higher Education:
Officer training, Armed Forces University, and the G.I. Bill
The historical relationship between higher educational institutions, officer training, and the potential for mental advancement of enlisted personnel during W.W.II is important to note. Officer selection (by way of the College Board sponsored V-12 exams), the further chance of mental advancement for enlisted personnel through the so-called "Armed Forces University," and the G.I. Bill of Rights (which included an educational component) are covered here. In contrast to the testing format of W.W.II officer selection, the nature of and logic behind both the Armed Forces University and the G.I. Bill is evidence of a distinct (albeit short-lived) movement away from the standard public education "additive" view of human mentality and toward recognizing the transformative role of higher educational institutions.
Selecting Officers (The V-12, A-12 tests).
When considering the relationship between enlisted personnel and the mid-century American system of higher education, the fundamental functional/administrative link between the W.W.II effort and the College Examination Board must be recognized. As stated earlier, the Board had abandoned its traditional essay examinations in favor of the psychometrically based SAT and other standardized Achievement Tests. During W.W.II, it threw its staff and resources into the development of the A-12 and V-12 Officer Qualifying Tests, which were administered to 316,000 men in 1943 alone (Owen, 1988). The nature and overall form of the test had been worked out in consultation with the Navy. Ten national regions had been set up under ten Regional Directors, with thousands of school and college teachers acting as supervisors.
The Board also handled (through the Bureau of Navy Personnel) approximately one hundred service jobs, including the printing or reprinting of 133 tests, answer sheets, and bulletins, a total of 36,000,000 pages of material (Owen, 1988; Fuess, 1950). It also developed a college admissions test for veterans, entrance exams for the U.S. Naval and Coast Guard academies, and the academic scholarship tests for Westinghouse and Pepsi-Cola. Most of these programs were then carried forward after the war under the later-established Educational Testing Service (Fuess, 1950).
Despite the context of war, the intent of these tests was roughly the same as that of the interwar educational requirement (for pilots) and of the ongoing elitist push for standardized entrance exams for higher educational institutions: to select examinees on the basis of their previous educational privilege. Certainly the basic structure and content of the V-12 exam was similar to that of the SAT and GRE educational aptitude tests (Crawford & Burnham, 1945). The absolute level of academic performance necessary to obtain an Officer Classification had also increased since the time of the Army Alpha test of W.W.I (see below).
For those who scored high enough on the V-12 exam, preparatory military officer training took place on college campuses, and in this respect the W.W.II war effort would further expand the context of military-industrial control over the higher educational system (see Rudy, 1991). The elitism that Sinclair (1922; 1924) had complained about was still largely in effect in American higher educational institutions. This statement, however, cannot be made categorically because an alternative endeavor during W.W.II, the so-called Armed Forces University, had a decidedly transformative (non-elitist) intent.
Armed Forces University (personal advancement for the G.I.).
When basic and specialist training was completed and the men were assigned to inactive theaters overseas, they found ample time to devote to a continuation of education. Getting ahead in the Army and getting ready for the return to civilian life were two of the selling points for G.I. participation in the so-called Armed Forces University (Benbow, 1944). Founded three weeks after Pearl Harbor as the "Armed Forces Institute" (a worldwide campus for the Armed Forces), its student body, dressed in khaki, blue, and forest green, was soon located in the U.S., Iceland, Australia, North Africa, Alaska, and everywhere Americans fought.
The brainchild of Francis T. Spaulding (Dean of the Graduate School of Education, Harvard University), the Institute had its administrative headquarters in Madison, Wisconsin. Given the non-elitist emphasis of the program, it should not be surprising that the Director of the Examination Staff, Ralph Tyler (a former University Examiner from the University of Chicago), was also a former director of the Eight-Year Study. The Education Branch recruited experienced educational administrators and supervisors, training them in Army-style education. Twenty-five of these Education Officers were then sent to Service Command Headquarters in the U.S. and to theaters and bases overseas.
The Armed Forces University (officially called the Armed Forces Institute) consisted of three cooperative administrative sections. The Correspondence Section prepared and provided self-teaching instructional materials, examinations for evaluation or certification, and arrangements for accreditation and for university extension courses for U.S. Armed Forces enrollees. A charge of $2.00 for each Army course (not to exceed $20.00 for any one university transfer course) was levied as a means to keep the program cost-efficient. The Group Instruction Section supplied educational motion pictures and foreign-language recordings to both Service Commands and overseas forces on request. The Library Section made recommendations concerning the physical needs of the Army Library Service and purchased books and magazines for overseas forces, hospitals, transports, and traveling libraries. By 1944, the Institute's library resources consisted of 2,000 libraries holding ten million books.
Under "plan 1," sixty-four high school and junior college level courses were offered. Approximately 500 university and high-school credit courses were also offered by the more than seventy-five cooperating higher educational institutions. Nearly every major field of academia was represented. The catalogue of the Institute, What Would You Like To Learn?, was distributed to all units of the Army, Navy, Marine Corps, and Coast Guard. Copies were also provided to high school and college guidance officers (Benbow, 1944). Each soldier's military Qualification Card carried an entry for courses completed with the Institute. This was the military counterpart of the civilian "permanent educational record" kept by a college registrar.
The G.I. Bill and Higher education
As indicated above, the Armed Forces Institute courses harked back to the transformative emphasis of some of the more effective depression-era youth programs. Another depression, of course, was almost certain to occur if millions of ex-servicemen returned to compete for domestic jobs against comparable numbers of ex-war-industry workers. Whereas four million soldiers and 30% of the national economy had been tied up in the W.W.I effort, some eleven million enlisted men and 70% of the economy would be cut loose at the end of W.W.II.
At the Postwar Manpower Conference (1943) delegates indicated that upon cessation of the overseas conflict, the number of domestic jobs would immediately drop from 63.5 million to 57 million (Olson, 1974). At this time, therefore, Americans were reminded that only the U.S. and Great Britain had escaped government overthrow after W.W.I and that disgruntled veterans had been the backbone of Communist Revolution in Russia, Fascism in Italy, Nazism in Germany, and the Collaboration movement in France (Waller, 1944; Robin, 1995; Bruce, 1989; Rattansi & Westwood, 1994).
Roosevelt's signing on November 13, 1942 of an amendment to the 1940 Selective Service Act, which lowered the draft age to eighteen, also left no question that these youths would soon be in genuine need of finishing or extending their pre-war (and ongoing in-service) education. Roosevelt made his "Message to the Congress on Education of War Veterans" on October 27, 1943, exhorting the need to develop new veterans' legislation, a bill to include educational benefits (Ross, 1969). Something had to be done beyond a bonus, "something that would contribute significantly to a healthy economy, and something that would allay veteran resentment toward government" (Olson, 1974, p. 5). That "something" was the establishment of a G.I. educational benefit. Despite the depression, the average level of G.I. education was already higher than it had been at the end of W.W.I. Among W.W.I vets, only one in five had gone beyond grade school and fewer than 5% had more than a high school education. By 1942, 14% of draftees had gone beyond secondary training. Hence, post-secondary institutions would soon have to take a greater responsibility in furthering the education of returning veterans (Miller & Brooks, 1944; Fine, 1946; Wilson, 1974).
Congress initially considered a proposal that included medical care for the disabled, mustering-out pay, college tuition for a carefully selected few, and vocational training for the rest. The immediate prototype for the eventual "G.I. Bill," however, was worked out by the American Legion in December of 1943. This was a more controversial, comprehensive set of benefits dealing with education, job training, unemployment insurance, and business or housing loans. It was eventually passed in Congress in early June 1944, just as "D-Day" (the decisive Allied invasion of the Normandy beaches) was underway.
The G.I. Bill of Rights was a universal benefit package for all of those who served at least 90 days. It included educational benefits on a day-for-day basis, guaranteed 50% of business or housing loans up to $2000, and provided for one year of unemployment insurance. The universality of the benefits was a particularly distinctive feature of the bill and was especially fitting because W.W.II was (and would remain) the most demographically representative war to be fought in the 20th century.
Ending the War:
"A-bomb" testing, their use, and ongoing nuclear tests
After the D-day advance of Allied forces into Europe and the Battle of Midway, the European and Japanese Axis forces were in constant retreat (Keegan, 1982; Neillands, 1995). By the spring of 1945, Allied armies were sweeping toward Berlin (the Soviets from the east, the Americans from the west). In the face of an inevitable advance by the Red Army into Berlin, Hitler had ordered a "scorched earth" policy as a delaying tactic. The destruction of all remaining mines, power stations, railroads, and water supplies surrounding the capital (i.e., all means of possible life or employment for survivors) was his final betrayal of his people.
The political leadership of this world conflict was also changing. In America, President Roosevelt had died of natural causes on April 12, 1945 and was replaced by his little-known vice-president Harry S. Truman. Of all the men who had been president, Truman was one of the least formally groomed for the job. Vice-president for only 82 days and excluded from Roosevelt's inner circle, he knew few details about the war raging across three continents and two oceans. But within 4 months of taking office, Truman would have at his command the most destructive weapon as yet devised by mankind. He relied on all of his past experience (including his W.W.I service where he saw the destructive results of war machinery up close) in the decision to use the Atomic bomb against the Japanese (Walker, 1997; Hutmacher, 1972; Hamby, 1973).
Under the looming conditions of complete military defeat, the Italian and German fascist leaderships soon collapsed. Mussolini was captured by Italian partisans while attempting to flee the country in disguise. He was then tortured and executed. Hitler and his Propaganda Minister Goebbels escaped such a fate only by committing suicide in a Berlin bomb shelter. The fall of the Reichstag (defended to the last man by the SS) was the final signal of the end of Nazi resistance, and it was left to the remaining German Military High Command to accept surrender and occupation.
"Victory in Europe" day (VE Day), May 8th, 1945, was elating to the American public, but Armed Forces personnel were well aware of the need to remain prepared for the impending prolonged invasion of Japan (Chappell, 1997; Ross, 1997). Although U.S. Forces had managed to defeat the Japanese Navy and Air Force at Midway, American Marines were still fighting from island to island in order to retrieve territory formally claimed by Japanese expansionism. Truman soon reminded the nation of this fact: "The victory won in the West, must now be won in the East. The whole world must be cleansed of the evil from which half of the world has been freed" (see Hutmacher, 1972; Walker, 1997).
"Total War" and the decision to drop Atomic Bombs
Evidence in 1940 that German scientists had already achieved a sustained nuclear reaction led to immediate fears in the physics community that the Nazis might soon develop a nuclear bomb capability. Acting on the advice of top physicists such as Albert Einstein, FDR began funding the top secret Manhattan Project which was tasked with building deliverable nuclear weapons to be used against Germany. By the summer of 1945, three atomic devices were ready and 7 others were under construction (see fig. 44).
Figure 44 Trinity and Atomic Soldiers' willingness to fight. The Manhattan Engineer District (a new branch of the Army engineers) was established under the command of Maj. General L. R. Groves, and an international team of scientists was assembled at Los Alamos in an effort to put atomic theory into practice. The Manhattan Project (and its aftermath) was both a technocratic race and an ethical test of our ability (or willingness) to commit mass destruction of fellow human beings for the sake of overall peace. By early summer of 1945, three atomic devices were ready and 7 others were under construction. The first device, code-named "Trinity," was a test model to be fired from atop a stationary test tower in Alamogordo, New Mexico. Trinity's plutonium-based atomic explosion was unleashed before sunrise on the morning of July 16, 1945. Its 13 pounds of explosives evaporated the 60-ft steel test tower, left a crater more than 2 miles wide, and knocked down men 10,000 yards away (photo from Copp & Zanella, 1993).
The A-bomb, like all other W.W.II military technologies, was developed within the context of "total war." This truly global conflict entailed a redefinition of what was to be considered a legitimate military target. The intentional annihilation of civilian populations within cities (rather than military targets on battlefields per se) was routinely carried out by all sides of the conflict. By 1945, Allied bombing raids had already destroyed nearly all of Japan's biggest cities and killed more than half a million civilians. For instance, 10,000 lbs of incendiary chemical bombs had already been dropped on Tokyo. The Japanese military leadership, however, steadfastly refused the long-standing American unconditional surrender terms. These terms had first been set out by FDR's administration and disallowed any provisions for avoiding U.S. occupation, war crimes punishment, or reparations demands. They also made no provisions for retaining the Japanese Emperor.
Within this context of uncompromising total war, U.S. military advisors (to FDR and then Truman) spent very little time debating whether the A-bomb would be used. The major emphasis was on the timing, reliability, explosive yield, and optimum deliverability of these new weapons. As historical fact would have it, the bomb was not ready "in time" for use against Germany. By the time of the post-VE day conference in Potsdam (where Allied representatives gathered to redraw the maps of Europe), Truman brought the ability to use nuclear force with him to the negotiation table. Upon hearing the news of this American capability, Stalin told Truman that he sincerely hoped this new weapon would be used against Japan (Wainstock, 1996; Walker, 1997).
In the three and a half years since Pearl Harbor, there had been 900,000 American casualties (dead, wounded, or missing in action). Japan would indeed now be forced to surrender by all means possible. In particular, the Enola Gay, a specially modified B-29, was already rehearsing maneuvers to drop the first-ever atomic bomb. On July 25, Truman transferred temporary control over the use of atomic weaponry to the military. The next day he issued a final radio ultimatum (called the Potsdam Declaration), a copy of which was also dropped over Japan in the form of leaflets from American planes. Its carefully worded diplomatic language was met (two days later) with a historically significant and proud refusal by the Japanese military and governmental leaders.
The cleverly worded Japanese refusal put a quintessentially Eastern concept, Mokusatsu (literally meaning "to kill with silence"), into the American lexicon. The use of that term was completely ambiguous to the Western mind. Depending on how it was interpreted, the Japanese were either rejecting the declaration out of hand or were indicating that they needed a concession for the continuance of the Imperial tradition. Within the context of total war, however, the actual intent did not matter because from the American point of view absolutely no concessions were acceptable. The situation was one of a fundamentally insoluble difference between, on the one hand, the U.S. modernized industrial military administration (using its provisionally provided technological prowess to defend democracy against the spread of fascist "evil") and, on the other, the infinitely older (and ideologically fanatical) Japanese warrior tradition (which viewed its role as one of saving face for their Emperor godhead).
On August 6, 1945, the uranium bomb called Little Boy was dropped from the Enola Gay on the Japanese city of Hiroshima at 8:15 am local time. The resulting air burst immediately killed more than 80,000 men, women, and children. Tens of thousands more people would die from radiation sickness or radiation-induced complications in the days and years to come (Linenthal & Engelhardt, 1996; Harwit, 1996). Following the atomic incineration of Hiroshima, a further ultimatum (in no uncertain terms) was delivered by Truman. It included the following wording: "The force from which the sun draws its power has been loosed against those who brought war to the Far East....If they do not now accept [the Potsdam Declaration] they may expect a reign of ruin from the air the like of which has never been seen on this earth" (Truman, in Walker, 1997). But still there was no word of surrender. Three days after the first bomb, the Soviet Union (in fulfillment of the Potsdam agreement) declared war on Japan, attacking two of their outlying islands (Wainstock, 1996).
Further delay on the part of the Supreme War Council of Japan meant that on August 9th another B29, called Bock's Car, took off from Tinian Island on a mission to incinerate the city of Kokura (previously spared bombing in order to better assess A-bomb effects). Inclement cloud cover, however, meant that a secondary target, Nagasaki, was selected in flight as the city where the second bomb, Fat Man (an implosion device), was actually dropped. The detonation of Fat Man, at 11:02 a.m., yielded a 20 kiloton explosive force. In one-tenth of one second another 40,000 people were killed, and further deaths followed.
The next day, Truman took executive authority for use of the bomb back from the U.S. military and waited for a reply from Japan (Williams & Cantelon, 1984; Jones, 1985; Walker, 1997). Only upon assessing the catastrophic damage caused by the two A-bombs did the Japanese Emperor (and his military) accept unequivocal capitulation to the Allied forces. On August 14th, V-J Day, Japan unofficially surrendered, and on September 2 the official Japanese surrender took place aboard the battleship Missouri in Tokyo Bay. During this auspicious moment, General MacArthur signed for the Allied forces. World War II was over, and the millions of U.S. service personnel already returning from Europe would soon be joined by those from the Pacific theater of war (see Fennell, 1996).
MacArthur, Truman, and the G.I.s
As Supreme Commander of the American Occupying Forces (between 1945 and 1950), MacArthur held more absolute firepower at his disposal than any previous Japanese Emperor and came to personify (for both peoples) the ongoing American dominance over that country. A dramatic indication of that firepower came with Operation Crossroads, eleven months after the Hiroshima blast. As part of the benevolent democratization of Japan, Emperor Hirohito was eventually required to make a radio address renouncing his deity status, but he remained in place as a cultural figurehead. Similarly, MacArthur too would eventually fall from grace (at least in the minds of military historians).
In contrast, Truman's happenstance presidency and unexpected re-election after the war nicely exemplify the transformative potential of American society during the first half of the century. The fortuitous rise of returning G.I.s into post-war middle-class America would (for the most part) exemplify the same over the next quarter century. As indicated in the next chapter, these two historical quirks were actually related, because Truman's immediate post-war political strategy was to pass on his vision of progressive American society to the public through his "Fair Deal" domestic programs. This platform, however, would be somewhat tempered by both (1) the substantial context of the emerging Cold War (i.e., of ongoing limited military engagements); and (2) inertia caused by long-standing traditions of domestic racial inequality.
While the vocational testing subdiscipline survived the Great Depression by way of a self-serving reliance upon the sociology of management (i.e., selection of compliant workers), the function of vocational testing in the context of war industry work shifted toward the selection of available workers for training emphasis. This was certainly one progressive disciplinary outcome of the W.W.II application of vocational testing technologies. It was also, however, very short-lived.
A second progressive trend can be noted in the W.W.II testing technologies used for the "classification" of American inductees. That is, while initial testing utilized general military test batteries for classification (yielding single or multiple scores), the emphasis then shifted toward the predictive validity of patterns of test scores --with these patterns having been obtained with reference to success in training or in combat. Again, this trend would not be carried forward consistently into the next era of testing.
It must also be pointed out that the overall selection emphasis, conservative assumptions,
and self-serving subdisciplinary motives of the testing tradition diverged considerably
from the guidance and universality aspects of Federal New Deal Youth Programs
(including their topical extension during wartime). Depression era youth programs
(including the CCC and NYA) --which attained a near universal access status and
provided ameliorative educational programs-- and the so-called "Eight-Year
Study" --where college entrance exams for selected schools were waived between
1933 and 1941, thereby providing an argument for increased college access for those
who might not have passed standardized entrance examinations-- are early exemplars
of this egalitarian trend. Similarly, both the W.W.II era Armed Forces University
(which provided educational opportunities for personal advancement of military
personnel during the war) and the GI Bill of Rights (designed to give returning
veterans a leg up into the middle class) were extensions of this egalitarian trend.
As we will see in the next chapter, expansion of the selection emphasis in 1950s
era testing --under the newly founded Educational Testing Service (ETS)-- took
place in a decidedly elitist manner. Both the non-elitist emphasis and the societal
implications of Depression era and W.W.II testing efforts were lost amongst the
wider Cold War opportunities for self-promotion and statistical refinement of
the testing industry.