
ADVICE FROM A COMPUTER

"Take a hardware merchant thinking of opening a store in the bank's neighborhood," says John McCleary, an IBM specialist in the use of computers in financial fields. "He'll ask the bank if it thinks he can make a go of it. The computer will help in providing the answer."

The Commerce Department and the Brookings Institution are both developing computer models of the U.S. economy. It's hoped that within a decade highly detailed models that shed new light on the workings of the economy will be available. Such models would show, among other things, how gross national product, personal income and employment would respond to a cut in Government spending, a rise in business plant and equipment outlays or an income tax reduction. Such foreknowledge would help Government economic officials make sound decisions.
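To make the flavor of such a model concrete, here is a minimal sketch of the simplest "what if" computation of this kind, a one-equation Keynesian multiplier. The coefficient and the Python rendering are illustrative assumptions only, not a description of the Commerce or Brookings models.

# Toy sketch of the kind of "what if" question an economic model answers.
# The coefficient below is invented for illustration; a real model of the
# U.S. economy would estimate hundreds of parameters from data.

MPC = 0.75  # hypothetical marginal propensity to consume

def gnp_response(spending_change):
    # Simple Keynesian multiplier: total GNP change from a spending change.
    multiplier = 1.0 / (1.0 - MPC)
    return spending_change * multiplier

# How would GNP respond to a $10 billion cut in Government spending?
print(f"GNP change: {gnp_response(-10.0):+.1f} billion dollars")  # -40.0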

Computers also are likely to find uses in tackling social problems, such as air and water pollution, inadequate mass transit and traffic congestion. The advantage of a computer here is that it can juggle many inter-related variables and evaluate the effect of various courses of action in a fraction of the time it would take human beings. For example, to help plan for road and transit needs, a computer could weigh such factors as present traffic patterns, the impact of future residential and business development on traffic, public preferences for private autos as against mass transit, the deterrent effect of tolls for road use and the effect of fare cuts on transit patronage.

Most of the chores performed by computers in the business world today are routine clerical assignments such as preparation of payrolls or customers' bills. "But the really significant use of the computer in these coming years will be in giving the head of the company a total picture in graphic form of what his company is doing right now," says Louis Rader, a GE vice president.

HELP FOR THE BOSS

Information about incoming orders, sales, inventories, expenses and production schedules will be fed into a central company computer from scattered offices. The essential data will be relayed from the computer to the display panel in the chief executive's office. "With up-to-date information he can make a quicker assessment of the situation," says Mr. Rader. "It will cut out waste."

Many of the decisions that middle-management men now make will be made almost automatically by a computer; for example, if a manufacturer's inventory of finished goods declines to a specified level, a computer will print out a production order. This prospect has led some people to conclude the middle-management level will just about disappear. But not everyone agrees. Jay W. Forrester, a professor at MIT's Alfred P. Sloan School of Management, asserts:

"The computer will change the nature of his work. Right now a great deal of the middle manager's work is routine and repetitive, the kind a computer can do. In the future, the middle manager will handle more creative tasks. Perhaps more of them will handle personnel problems, such as motivations of employes. Or more of them will be thinking of new ways of doing business." In the manufacturing process the computer's major triumphs to date have been in the petroleum chemical and metals industries. Production in these fields usually consists of a continuous flow of liquid, dry or molten substances easily handled automatically, and such processes are ideal for control by computers. Instruments monitor flow, temperature and other variables and flash data to the computer, which is programed to decide when control adjustments are necessary and to send out signals making required changes in settings.

OUTLOOK FOR FACTORY AUTOMATION

Nobody really knows if the manufacture of automobiles, tires or rocking chairs can be programed into a computer to the extent that human hands become insignificant. But computer technologists say factory automation definitely will make important advances in the 1970s. In particular, they say that computers will take over the control of materials handling conveyors, drilling machines and machines for testing the end product.

An IBM plant in Endicott, N.Y., that is run almost entirely by a computer demonstrates how automation can cut costs and raise productivity. The plant makes electronic circuit cards that perform the logical and arithmetical operations in computers; so, in effect, the computer helps make other computers. The IBM plant produces many types and sizes of circuit cards. The computer sees that conveyors get each card to the right machine at the right time and that the machine performs the right operation on it. It controls the drilling machines that make holes in the cards, the testing machines that ensure the holes are in the proper place and the insertion machines that place small components in the holes.
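The routing decision at the heart of that description (which machine should see a given card next) reduces to a table lookup. The sketch below is purely illustrative: the card types, station names and route table are invented, since the article does not describe the actual Endicott control program.

# Hypothetical routing table: each card type visits these stations in order.
ROUTE = {
    "logic": ["drill", "test_holes", "insert", "final_test"],
    "memory": ["drill", "test_holes", "final_test"],
}

def next_station(card_type, completed):
    # Return the next machine a card should be conveyed to, or None if done.
    for station in ROUTE[card_type]:
        if station not in completed:
            return station
    return None

print(next_station("logic", {"drill"}))  # test_holes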

IBM says the automated assembly lines reduce scrappage and improve quality. The company estimates that computer control of the Endicott plant permits production of circuit cards at half what they would cost with conventional hand operations. Moreover, says IBM, the automated system enables it to respond to market demand for different types of circuits two to three times faster than would be possible otherwise; with computer control there is no need to shut down production and shift personnel about when changing the product mix.

IMPACT ON EMPLOYMENT

The automated circuit card plant employs only a fraction of the production workers that would be needed without automation. The sight of production lines without people is another of the considerations that sometimes give rise to ambivalent feelings about computers. Some observers, particularly in union circles, fear widespread unemployment will result inevitably from increased use of computers in industry.

But others say there can be no clear-cut answer at this point. Logically, it would seem that if industry comes to rely almost totally on computers to guide production operations, there simply would not be enough jobs to go around, unless the work week were drastically reduced.

Up till now, however, workers displaced by automation have generally been absorbed by the expanding economy, and some economists think this will continue to be the case for the foreseeable future. Computer makers themselves note that their industry has created some 250,000 new jobs and that the total will grow.

Even in some fields where controversy over the introduction of computers would seem unlikely, there are those who doubt the amazing machines will be an unmixed blessing. Some doctors and hospital officials, for example, indicate they might not be willing to hand over patient records to computers to which others would have access; making such information freely available, they fear, might lead to a rise in malpractice suits.

Some observers maintain that computer networks set up by banks or by time-sharing data-processing centers also have their alarming aspects. What would happen, they ask, if a computer linking thousands of users were programed incorrectly? Most likely, a monumental snarl would ensue. Bills would be deducted from the wrong bank accounts. The boss's paycheck would be credited to the office boy. The solution to a stress problem posed by an engineer would clack out on the doctor's teleprinter.

SAN FRANCISCO STATE COLLEGE,
SCHOOL OF BEHAVIORAL AND SOCIAL SCIENCES,
DEPARTMENT OF SOCIOLOGY,
San Francisco, Calif., August 7, 1966.

THE EDITOR,
THE AMERICAN SOCIOLOGIST.

TO THE EDITOR: It has come to my attention that the U.S. Bureau of the Budget proposes "... the establishment of a huge centralized computer into which all the data on any American now collected by some 20 separate federal agencies would be fed. It could mean an instant check on any man's birth, school grades, military or criminal record, employment, income, credit rating and even personality traits." (San Francisco Examiner, July 31, 1966) I have no way to know if this report is entirely accurate, but anyone who has ever held a "secret" or higher security clearance has some idea of the range of data collected by only one agency. The data which could be centralized by combining the records from a number of agencies is truly fantastic.

I have talked with a number of people who favor this proposal. They argue that it would eliminate duplication, that it would make police and security work more effective, that it would make running away from obligations such as alimony payments more difficult. These arguments are quite true as anyone who has witnessed the speed and accuracy of California's stolen automobile computer system, or the ten-second personal warrant check available in some areas, can testify.

The first step in the process of setting up a National Identity File has already taken place with the establishment of the National Computer Center in Martinsburg, West Virginia. There taxpayers' records, centralized on the basis of their Social Security numbers (which will probably be our new National Identity Numbers), are to be compared with the informational filings of their employers and their banks.
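In computing terms the comparison is a keyed match: records from different sources are brought together on a common identity number and checked against one another. The sketch below is a toy version of such a cross-check; the numbers, amounts and field layout are invented and do not describe the Martinsburg system.

# Toy cross-check of reported income against employer filings, keyed on
# an identity number. All records here are invented for illustration.

returns = {"123-45-6789": 8500, "987-65-4321": 12000}           # as reported
employer_filings = {"123-45-6789": 8500, "987-65-4321": 14500}  # as filed

for number, reported in returns.items():
    filed = employer_filings.get(number)
    if filed is not None and filed != reported:
        print(f"Discrepancy for {number}: reported {reported}, filed {filed}")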

Some sociologists are probably intrigued by the possibility of new kinds of demographic and mobility studies, by the possibility of really scientific sampling, or by the sheer amount of raw knowledge obtainable from such a file. If it were available it would be a powerful research tool. I think it is a mistake to be swayed by such considerations.

To argue in favor of a National Identity and Data File requires the assumption that all future governments of this country in all political situations (including war hysteria and witch hunts), all federal agencies both public and secret, and all individuals who could gain access under the cloak of authority or by ruse, will be benevolently motivated. I do not think this assumption can be made by a reasonable man. The potential for evil, for official and unofficial blackmail, for the harassment of political minorities is virtually unlimited. One must realize that whatever safeguards may be proposed in the initial justification could later be removed by a powerful president or a stampeded Congress. Also, the safeguards probably would be circumvented, on or off the record, by our undercover agencies.

I see no reason to assume that the government will be any more resistant to the pressures of the moment in the future than it has been in the past. Sending Japanese-American citizens to concentration camps would have been immensely speeded by having a National Identity and Data File, and McCarthy could have destroyed many more careers if he had had computer records of security investigations. Protestors of current Viet Nam policy could easily be marked "politically unreliable" for shipment off to the Tule Lake Relocation Center after we bomb China.

On a sociological level an ex-convict would carry the stigma throughout life. He could have a hard time starting anew if, when he is stopped for a traffic offense, the police learn that he is an ex-convict, possibly tell his employer, and from that time on consider him a "suspect" in every crime committed. It happens. An ex-mental patient, who, as Szasz argues, may have been hospitalized for a bad reason in the first place, may find this status coming back to haunt his career and the credibility of his assertions years later. A Bad Conduct Discharge, a record of a homosexual contact, of unwed motherhood, an affair recorded in a security check, all would be available essentially forever to ruin lives, deny jobs, and make the individual an object of pernicious official attention.

It is important to realize that there is no system of safeguards which will assure that the possibilities I have listed will not happen, and there is no safeguard which cannot be removed. I think that the American Sociological Association ought to discuss the issue of a National Identity and Data File at the earliest possible time and take an official stand opposing its establishment.

Sincerely,

H. TAYLOR BUCKNER, Assistant Professor of Sociology.

[From The Public Interest, Spring 1967]

DATA BANKS AND DOSSIERS

(By Carl Kaysen)

Last year, a government committee headed by Carl Kaysen proposed the creation of a "national data center." The intention is to improve the usefulness of available statistics for policy planning purposes by funneling such statistics into a central "information bank." But the proposal evoked considerable criticism as representing a possible threat to privacy and an undue concentration of power, in the form of knowledge, in governmental agencies. In this article, Mr. Kaysen, who is Director of the Institute for Advanced Study, in Princeton, presents the case for a "data bank." We expect to continue the discussion of this matter in future issues. -Ed.

Both the intellectual development of economics and its practical success have depended greatly on the large body of statistical information, covering the whole range of economic activity, that is publicly available in modern, democratic states. Much of this material is the by-product of regulatory, administrative, and revenue-raising activities of government, and its public availability reflects the democratic ethos. In the United States there is also a central core of demographic, economic, and social information that is collected, organized, and published by the Census Bureau in response to both governmental and public demands for information, rather than simply as the reflex of other governmental activities. Over time, and especially in the last three or four decades, there has been a continuing improvement in the coverage, consistency, and quality of these data. Such improvements have in great part resulted from the continuing efforts of social scientists and statisticians both within and outside the government. Without these improvements in the stock of basic quantitative information, our recent success in the application of sophisticated economic analyses to problems of public policy would have been impossible. Thus, the formation last year of a consulting committee composed largely of economists to report* to the Director of the Budget (himself an economist of distinction) on "Storage of and Access to Federal Statistical Data" was simply another natural step in a continuing process. The participants were moved by professional concern for the quality and usability of the enormous body of government data to take on what they thought to be a necessary, important, and totally unglamorous task. They certainly did not expect it to be controversial.

The central problem to which the group addressed itself was the consequences of the trend toward increasing decentralization in the Federal statistical system at a time when the demand for more and more detailed quantitative information was growing rapidly. Currently, twenty-one agencies of government have significant statistical programs. The largest four of these (the Census, the Bureau of Labor Statistics, the Statistical Reporting Service, and the Economic Research Service of the Department of Agriculture) account for about 60 percent of a total Federal statistical budget of nearly $125 millions. A decade ago, the largest four agencies accounted for 71 percent of a much smaller budget. By 1970, the total statistical budget of the Federal Government will probably exceed $200 millions and, in the absence of deliberate countervailing effort, decentralization will have further increased. Yet it has already been clear for some time that the Federal statistical system is too decentralized to function effectively and efficiently.

THE DRAMA BEGINS

Such is the background of the report which recommended the creation of a National Data Center. Here, Congressman Cornelius Gallagher (D., 13th District, N.J.) entered the scene, with a different set of concerns and objectives. He was Chairman of a Special Subcommittee on Invasion of Privacy, of the Government Operations Committee of the House, which held hearings on the proposed data center and related topics in the summer of 1966. To some extent the hearings themselves, and to a much greater extent their refraction in the press, pictured the proposed Data Center as at least a grave threat to personal privacy and at worst a precursor to a computer-managed totalitarian state. Congressman Gallagher himself saw the proposal as one more dreary instance of a group of technocrats ignoring human values in their pursuit of efficiency.

*The full title of the Report, dated October, 1966, is: Report of the Task Force on the Storage of and Access to Government Statistics, and it is available from the Bureau of the Budget. The members of the Committee which produced it were: Carl Kaysen, Chairman, Institute for Advanced Study; Charles C. Holt, University of Wisconsin; Richard Holton, University of California, Berkeley; George Kozmetsky, University of Texas; H. Russell Morrison, Standard Statistics Co.; Richard Ruggles, Yale University.

It now appears as if the public outcry which the Committee hearings stimulated and amplified has raised great difficulties in the way of the proposed National Data Center. To what extent are they genuine? To what extent are they unavoidable? Are they of such a magnitude as to outweigh the probable benefits of the Center?

In answering these questions, it appears simplest to begin with a further examination of the proposal itself. The inadequacies arising from our overdecentralized statistical system were recognized two decades ago; since then they have increased. The present system corresponds to an obsolete technology, under which publication was the only practical means of making information available for use. Publication, in turn, involved summarization, and what was published was almost always a summary of the more basic information available to the fact-gathering agency. In part, this reflected necessary and appropriate legal and customary restrictions on the Federal Government's publication of data on individuals or on single business enterprises. In part, it reflected the more fundamental fact that it was difficult or impossible to make use of a vast body of information unless it was presented in some summary form.

Any summarization or tabulation, however, loses some of the detail of the underlying data, and once a summary is published, retabulation of the original data becomes difficult and expensive. Because of the high degree of decentralization of the statistical system, it is frequently the case that information on related aspects of the same unit is collected by different agencies, tabulated and summarized on bases that are different and inconsistent, with a resultant loss of information originally available, and a serious degradation of the quality of analyses using the information. The split, on the one hand, between information on balance sheets and income statements, as collected by the Internal Revenue Service, and, on the other hand, the information on value of economic inputs and outputs as collected by the Census, is one example of this situation.
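A toy example makes the loss concrete. In the sketch below (all figures invented), micro-records are tabulated by industry; once only those totals survive, a different tabulation, say by firm size, can no longer be computed from them.

# Micro-data: (industry, firm size, payroll) - all figures invented.
records = [
    ("steel", "large", 900), ("steel", "small", 100),
    ("textiles", "large", 300), ("textiles", "small", 200),
]

# Publish a summary by industry.
by_industry = {}
for industry, size, payroll in records:
    by_industry[industry] = by_industry.get(industry, 0) + payroll
print(by_industry)  # {'steel': 1000, 'textiles': 500}

# From the summary alone, payroll by firm size is unrecoverable;
# the underlying detail is gone once only the totals are kept.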

The result of all this is the substitution of worse for better information, less for more refined analysis, and the expenditure of much ingenuity and labor on the construction of rough estimates of magnitudes that could be precisely determined if all the information underlying summary tabulations were available for use. This, in turn, limits the precision of both the policy process and our ability to understand, criticize and modify it.

These effects of the inability of the present system to use fully the micro-information fed into it are growing more and more important. The differentiation of the Federal policy process is increasing, and almost certainly will continue to do so. Simple policy measures whose effectiveness could be judged in terms of some overall aggregate or average response for the nation are increasingly giving way to more subtle ones, in which the effects on particular geographic areas, income groups, or social groups become of major interest. The present decentralized system is simply incapable of meeting these needs.

It is becoming increasingly difficult to make informed and intelligent policy decisions on such questions in the area of poverty as welfare payments, family allowances, and the like, simply because we lack sufficient "dis-aggregated" information (breakdowns by the many relevant social and economic variables) that is both wide in coverage and readily usable. The information the Government does have is scattered among a dozen agencies, collected on a variety of not necessarily consistent bases, and not really accessible to any single group of policy-makers or research analysts. A test of the proposition, for example, that poor performance in school and poor prospects of social mobility are directly related to family size would require data combining information on at least family size and composition, family income, regional location, city size, school performance, and postschool occupational history over a period of years in a way that is simply not now possible, even though the separate items of information were all fed into some part of the Federal statistical system at some time.
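Computationally, the combination such a test requires is a join of separately held files on a common identifier. The sketch below is illustrative only; the agency files, the family identifier and every field are invented.

# Toy record linkage across hypothetical agency files keyed on a family ID.
census = {"fam-01": {"size": 6, "income": 4200, "city": "Chicago"}}
schools = {"fam-01": {"avg_grade": 71}}
employment = {"fam-01": {"occupation": "laborer"}}

linked = {fam_id: {**census[fam_id],
                   **schools.get(fam_id, {}),
                   **employment.get(fam_id, {})}
          for fam_id in census}
print(linked["fam-01"])  # one combined record per family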

A secondary, but not unimportant, gain from the creation of the data center is in simple efficiency. At present, some of the individual data-collecting agencies operate at too small a scale to make full use of the resources of modern information-handling techniques. The use of a central storage and processing agency (while maintaining decentralized collection, analysis, and publication to whatever extent was desirable) would permit significant economies. As the Federal statistical budget climbs toward $200 million annually, this is not a trivial point. Even
