From The Ground Up

The Publication of Archaeological Projects

a user needs survey

 


3 Survey methodology

3.1 Overall survey methodology

A combination of methods has been used in this survey. The main body of data was generated using a mail questionnaire, constructed and circulated to provide information about the behaviour and needs of a large, representative sample of individuals. This was followed up by a series of face-to-face interviews with a much smaller sample, to examine a spectrum of user perceptions and expectations in more detail. These primary methods were supplemented by independent sources, such as data from libraries and citation indexes. Information was also gathered from individuals and organisations directly involved in the production of fieldwork publications, including editors, publishers, and national archaeological agencies/bodies with specific publication policies. Documentary sources also played an important part: aside from providing data about the history of policy debates, the historical review offered in Section 2 informed the questionnaire and interview design.

Information produced by each of the survey methods has been analysed and used in a manner appropriate both to the type of data generated, and the nature of the sample from which it is derived. Each method is individually discussed below (for further discussion see May 1993).

The methods were selected to be complementary, and to provide an overview of the ways in which fieldwork publications are being used and what people expect from them. Certain methods are more appropriate to some objectives than to others. Of the two primary survey methods, the questionnaire was primarily designed to address objectives 1-4 concerning actual use of fieldwork publications. Attitudinal questions were also included, but interviewing is a more sophisticated way to gain attitudinal data and consequently objectives 4-7 were partly addressed in interview. Considerable attention was given to the appropriate application of specific methods. Table 3.1 indicates which methods have been used to address each of the project's objectives.

Table 3.1: A summary of survey methods cross-referenced with objectives

Objective Research question(s) Data source(s)/ method(s) Justification/outcome
  1. To identify the different sources used in acquiring information about archaeological projects
  • How do people find out about fieldwork projects?
  • mail questionnaire
  • questionnaire provides data on sources of information used and the value placed on these sources
  2. To discover the frequency and purpose of use of project publications (and their archives) in contrast to other types of publication
  • How often do people use fieldwork publications and archives?
  • What are the main purposes of use?
  • How does frequency of use of fieldwork publications compare with use of other types of archaeological publication?
  • mail questionnaire

Also:

  • phase I interviews for some qualification on complexity of use
  • data on library use
  • sales figures from publishers
  • citation analysis
  • Hedley Swain survey on archives
  • the questionnaire provides quantitative data on use (and reasons for use) as perceived by respondents
  • phase I interviews provide important qualitative information on use to qualify general statistical trends revealed by the questionnaire
  • records of library use, sales figures and citation indices provide independent data on frequency of use (n.b. these data were not directly comparable with the questionnaire and interview data)
  • the Hedley Swain archive survey provides access to independent records from museums on use of archives (n.b. these data were not directly comparable with the questionnaire and interview data)
  3. To assess the use of, and need for, the various components which typically make up project publications (e.g. the general interpretation/synthesis; structural report; artefact and environmental reports)
  • How often do people need to use the various typical components which make up fieldwork publications?
  • How much importance/value do people place on specific components?
  • mail questionnaire

Also:

  • phase I interviews for some qualification on complexity of use
  • the questionnaire provides quantitative information on frequency of use of typical components
  • phase I interviews provide qualitative information on use to qualify general statistical trends revealed by the questionnaire. They also provide more detailed information about the values/importance placed on typical components
  4. To assess user needs and expectations with regard to the overall content and form of project publications
  • What do people need and expect from fieldwork publications in respect to the overall content and form?
  • mail questionnaire
  • phase I interviews

Also:

  • data from the Society of Antiquaries survey
  • the mail questionnaire provides quantitative information about broad attitudes and expectations
  • phase I interviews enable a more in-depth examination of the subtlety and complexity of people's attitudes and expectations.
  • the Society of Antiquaries survey provides detailed information about attitudes to publication policy (n.b. these data are not directly comparable with those derived from the questionnaire and interviews)
  5. To assess user expectations with regard to the style and presentation of project publications
  • What do people expect from the narrative style and presentation of fieldwork publication?
  • What changes would people like to see, if any?
  • mail questionnaire
  • phase I interviews
  • the mail questionnaire provides quantitative information about attitudes towards various styles of publication within the archaeological community
  • phase I interviews provide some further qualitative information for elaboration
  6. To assess user expectations with regard to the role of project publications in the production of archaeological knowledge
  • How do people perceive the role of fieldwork publications in their own research and/or that of others?
  • phase I interviews
  • phase II interviews (for the 'official' views)

Also:

  • mail questionnaire (section 4 may well provide some data on broad trends)
  • data from the Society of Antiquaries' survey
  • analysis of the published literature concerning fieldwork publication within the British Isles
  • phase I interviews provide information about how people within the archaeological community perceive the relationship between fieldwork publications and further research
  • phase II interviews provide qualitative information about how individuals involved in formulating publication policy perceive the relationship.
  • the questionnaire provides a crude measurement of people's perceptions of the effectiveness of the relationship between fieldwork publications and broader syntheses
  • responses to the Society of Antiquaries' survey provide further elaboration on people's views
  • analysis of the history of debates concerning fieldwork publications provides a further source of information.
  7. To assess user expectations with regard to the media of publication, as well as access to, and use of, microfiche and electronic forms
  • What access do people have to different publication media and what expectations do they have regarding media of publication?
  • mail questionnaire

Also:

  • phase I interviews (for some further information on expectations)
  • hit figures for specific internet publications
  • the questionnaire provides quantitative information on access to, and use of, various publication media.
  • phase I interviews provide further information on attitudes to publication media
  • hit figures provide further information about intensity of use although these will not be directly comparable with the questionnaire data
8. To compare user needs and expectations with current publication practice and rationale throughout the British Isles
  • Does current publication practice and rationale meet people's needs and expectations? If not, how do they differ?
  • mail questionnaire
  • phase I interviews
  • phase II interviews
  • study of literature and policy documents regarding fieldwork publications
  • a comparison of the results of the mail questionnaire and phase I interviews with current publication policy and practice as revealed from results of phase II interviews and analysis of current policy documents
9. To identify variation in respect to all of the above areas between different sections of the archaeological community and different regions/countries
  • Do user needs and expectations of fieldwork publications vary between different sections of the archaeological community and different regions/countries?
  • mail questionnaire
  • phase I interviews
  • phase II interviews
  • study of literature and policy documents regarding fieldwork publications
  • where appropriate, an analysis of all these elements of the survey in terms of region and archaeological constituency provides information about variation within the archaeological community and between regions/countries.

3.2 Defining the survey population

The survey population was delimited by the remit of the project, which concerns the archaeological community and related groups in Britain and Ireland. It comprised a range of individuals deemed likely to use archaeological project publications, including both those involved in the discipline and those working in cognate disciplines or professions. It should be noted that the definition of the survey population in these terms did not involve any a priori assumption that all members of the survey population would use fieldwork publications; that in itself was subject to investigation. Nevertheless, as discussed in Section 1, the lay audience in general was not included in the survey population. Whilst the lay readership is an important constituency – in some respects, arguably the most important of all – it was decided that the task of ascertaining its interests could not easily be assimilated to this study, and would best be served by a separate survey.

A further factor was the identification of germane sub-groups. One of the objectives was to ascertain whether individuals in different regions, countries or disciplinary sectors use publications in different ways. To assist this, five sub-populations were introduced: the Republic of Ireland; Northern Ireland; Scotland; Wales; and England. For this particular purpose, the Channel Islands and Isle of Man were incorporated into England, as they represent too small a group for purposes of quantitative analysis.

The identification of different disciplinary sectors was based on pre-existing recognised `constituencies'. Clearly, there is overlap between some of the constituencies, and some individuals will belong to more than one. These factors have been taken into account in both the application of the survey methods and the analysis of the results. Fourteen primary constituencies were identified:

Three secondary constituencies were identified, members of which might use project publications:

3.3 The mail questionnaire

3.3.1 Design of questions

The success of a survey hinges, among other things, on the design of its questionnaire, and great care was accordingly devoted to this stage of the project. First, it was necessary to refine the aims and objectives (see Table 3.1 above); to review literature on the history of fieldwork publication and the evolution of policy; to evaluate earlier surveys; and to define the survey population (see Section 2 and Section 3.2). Questions were then designed which drew upon this contextual material, with specific attention to how the results would be interpreted. Interpretation required consideration of database design and data entry (see Section 3.3.5) as well as quantitative analysis and possible modes of interpretation and explanation (Section 3.3.6). Given the criticality of the questionnaire, a consultant specialising in survey design was contracted to oversee these phases.

Three types of questions were used in the mail questionnaire:

A mix of closed and open responses was used for all three types of question. Closed questions involve fixed response categories which the respondent must tick or ring (see, for example, question 1.2). They have the advantage of making the questionnaire easier and less time-consuming for the respondent, and they produce standardised responses for data entry, analysis and interpretation. However, they do constrain respondents by providing only a limited set of responses from which to choose and thus provide a rather crude picture. Open questions, where the respondents answer in their own words, enable greater expression of subtlety and complexity of opinion, but they also take more time, elicit a lower response rate, and are difficult to code for comparison. For these reasons a mixture of open and closed questions was employed, and open questions were often used to follow up closed ones. The questions and response categories were designed to provide two types of measurement: nominal (ie those identified by named categories) and ordinal (those which rank differences in reply, for instance on a scale between `useful' and `not useful').
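The distinction between nominal and ordinal measurement can be sketched in a few lines of illustrative code; the response categories shown here are hypothetical examples, not the wording of the actual questionnaire:

```python
# Nominal measurement: named categories with no inherent order
# (hypothetical examples of information sources).
NOMINAL_SOURCES = {"journal", "monograph", "archive", "word of mouth"}

# Ordinal measurement: categories ranked along a scale
# between `useful' and `not useful'.
ORDINAL_SCALE = {"not useful": 1, "quite useful": 2, "useful": 3, "very useful": 4}

def code_response(answer):
    """Map a ticked ordinal category to its rank; None if unrecognised."""
    return ORDINAL_SCALE.get(answer.strip().lower())
```

Coding closed responses in this way is what makes the standardised data entry and comparison described above possible.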

The questions were grouped into sections on the basis of specific areas of interest identified in the objectives:

During question design, much thought was given to issues upon which not all members of the survey population would necessarily possess the knowledge to give answers (eg in relation to new electronic media). Hypothetical questions (for instance about future solutions to problems) were as far as possible avoided, as such questions tend to elicit hypothetical or negative answers that are difficult to interpret. Appendix 3.1 contains a copy of the final questionnaire.

3.3.2 Pilot survey

A pilot study was undertaken, involving a pre-test of a draft of the questionnaire which was sent to a small sample of individuals across a range of constituencies. For the most part, the pre-test questionnaire was completed and returned on an individual basis, but one focus group was also held with a group of respondents. The pilot study aimed to establish:

Fifty-three pilot questionnaires were returned. The exercise was particularly useful in highlighting ambiguous terminology and questions and indicating potential problems with closed questions. Appendix 3.2 contains a synthesis of the results of the pilot study, which resulted in considerable modification and improvement of the questionnaire.

3.3.3 Sampling methodology

A questionnaire survey is a method of gathering information from a number of individuals, a `sample', in order to learn something about the population from which these individuals are drawn (May 1993, 65). Indeed, it is generally accepted that a more accurate picture will be gained from surveying a known and representative proportion of the survey population, using various techniques to ensure a good response rate, than from undertaking a universal survey with all the problems of tracking responses and low response rates (see Fowler 1984, 48-52, 59). The way in which a sample is constructed is therefore an essential part of questionnaire design. The ideal prototype in population sampling is the simple random method, where a comprehensive survey frame is randomly sampled as if drawing names from a hat. Such an approach is, however, rarely practicable; indeed, the aim in most surveys is not to produce an entirely random sample, but to structure the sample in a systematic way to ensure good coverage of certain sub-groups within the survey population and good geographical coverage (see Bourque & Fielder 1995, 140-141; Fowler 1984, 22-25; May 1993, 70). In this project a stratified, systematic sampling technique was used. That is, the total population was divided into the constituencies defined above and individuals were then selected from each constituency on the basis of a sampling interval (ie every nth case). Within each constituency the list of individuals was also stratified (ordered) on the basis of the regions defined above, so that a region could not be poorly represented merely by chance. In order for comparisons between the behaviour and attitudes of different constituencies to be statistically valid, the aim was to gain at least 60 replies from each constituency. With a predicted response rate of c 30% it was necessary to sample c 200 individuals from each constituency.
As the size of the archaeological constituencies defined above varies, the total size of the sample frame for each constituency was divided by the required sample size to provide an appropriate sampling interval. The result is that different intervals were applied to different constituencies, but this approach ensured the systematic coverage necessary for comparison between constituencies (for full details see Appendix 3.3).
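The selection procedure described above can be sketched as follows. This is an illustrative reconstruction under the stated assumptions (c 200 individuals required per constituency, frame pre-ordered by region), not the actual procedure used in the survey:

```python
def systematic_sample(frame, required=200):
    """Select every nth individual from an ordered sample frame.

    `frame` is one constituency's list of individuals, stratified
    (ordered) by region so that systematic selection also spreads
    the sample across regions.
    """
    if len(frame) <= required:           # small constituency: take everyone
        return list(frame)
    interval = len(frame) // required    # the sampling interval n
    return frame[::interval][:required]
```

For example, a constituency frame of 1,000 names gives an interval of 5 and a sample of 200, while a frame of 3,000 names would be sampled at an interval of 15, so different intervals across constituencies still yield comparable sample sizes.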

The production of a sample frame (the list of individuals from which the sample is drawn) was an important part of the process. Very few surveys are based on entirely comprehensive sample frames (a list of the total population concerned) (Fowler 1984, 19), but there are a number of specific problems in the case of the archaeological community which merit comment. There is no central register or list of all individuals active in archaeology which encompasses all the constituencies outlined above. Various lists can be drawn from society membership lists and institutional employee records and then amalgamated, but these themselves contain inherent biases. Furthermore, certain constituencies contain a large number of self-employed or highly mobile people who are difficult to locate. Nevertheless, there have been a number of recent attempts to generate more comprehensive lists of individuals within the discipline. The sampling frame used here was the combined product of the latest Council for British Archaeology Yearbook (Heyworth 1995) and the list generated by the recent Profiling the Profession survey (Aitchison 1999). These were cross-checked against specific institutional lists, which provided information regarding curators, contractors, museum archaeologists, and university/college staff. Sampling frames for other constituencies could only be constructed by contacting a representative sample of organisations for lists of employees and members. This introduced a multi-stage sampling process: that is, first, organisations within a constituency were sampled using a stratified procedure to ensure good regional coverage, and then the lists of individuals they provided were sampled using a sampling interval. Details of the construction of the sampling frame and the sampling intervals used are given in Appendix 3.3.

3.3.4 Reminders and response rate

It is important when using a sampling procedure in a questionnaire survey to ensure a good response rate. A low response rate (< 20%) has been shown to be a significant source of bias in many surveys, as those who respond most readily often represent the practices and views of certain sectors of the wider population with particular interests and agendas. Consequently, most questionnaires are followed up by one or more reminder cards which significantly increase the response rate. In this case a reminder card was sent to all those who had not returned the questionnaire by the deadline. This yielded significant additional returns (see Section 4 for analysis of the response rate).

3.3.5 Database design

The PUNS database was constructed using Microsoft Access 2.0 on a PC/Windows 95 platform. The use of a relational database allowed the 250 or so fields required for each response to be spread across several tables, each record being related via a unique code for each response. Thus each returned questionnaire was represented as seven records across seven tables. In addition this allowed `many to many' relationships to be expressed efficiently, for example, where many respondents were members of the same societies. The database itself consisted of several Access database files, one containing the base tables mentioned above and the others designed specifically for data entry or analysis, accessing these tables by attachment. This minimised the risk of altering the data in error during analysis and allowed for easier management of the large number of queries and forms generated. Not all the fields could be accessed simultaneously as an Access 2.0 select query will select from a maximum of 250 fields; however, this was just enough to allow data entry to be carried out via a single form based on one large query.
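The relational layout described above can be sketched as follows. This is an illustrative reconstruction using SQLite rather than the Access 2.0 actually employed, and the table and field names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- One record per returned questionnaire in each table, all linked
-- by a unique response code (response_id).
CREATE TABLE respondent (response_id INTEGER PRIMARY KEY,
                         constituency TEXT, region TEXT);
CREATE TABLE section1   (response_id INTEGER REFERENCES respondent,
                         q1_2 TEXT, q1_2_ambiguous INTEGER DEFAULT 0);
-- ...further tables would cover the remaining questionnaire sections...

-- A many-to-many relationship, e.g. respondents and society
-- memberships, is expressed through a linking table:
CREATE TABLE society    (society_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE membership (response_id INTEGER REFERENCES respondent,
                         society_id INTEGER REFERENCES society);
""")
```

Keeping data entry and analysis in separate files that attach to base tables such as these, as described above, reduces the risk of corrupting the data during analysis.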

All data recorded on the questionnaires were entered into the database. Comments not relating directly to the questions or written as margin notes in the questionnaire were also included in the digital record, giving as complete a digital record as possible. Fields were available to indicate where answers to a particular question were ambiguous, for example where ticks had been used where ranking by number had been requested. This allowed ambiguous answers to be removed prior to the analysis of the response to a question, as the actual analysis was carried out on select queries from the original tables. The final stage of analysis consisted of exporting numerous crosstab queries to Excel, where the results could be easily handled and formatted for presentation, normally as percentages.
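The filtering and crosstabulation steps can be sketched as follows; the field names and response values here are invented for illustration:

```python
from collections import Counter

# Each dict stands for one answer to a question, with a flag marking
# ambiguous responses (e.g. ticks where numeric ranking was requested).
responses = [
    {"constituency": "curator",  "answer": "useful",     "ambiguous": False},
    {"constituency": "curator",  "answer": "useful",     "ambiguous": False},
    {"constituency": "academic", "answer": "not useful", "ambiguous": False},
    {"constituency": "academic", "answer": "useful",     "ambiguous": True},
]

# Ambiguous answers are removed before analysis.
valid = [r for r in responses if not r["ambiguous"]]

# Crosstabulate constituency against answer, then convert counts
# to percentages of each constituency's valid responses.
counts = Counter((r["constituency"], r["answer"]) for r in valid)
totals = Counter(r["constituency"] for r in valid)
percentages = {cell: 100 * n / totals[cell[0]] for cell, n in counts.items()}
```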

3.3.6 Analysis

Analysis focused on three main areas:

The results have been presented in tabular and graphic form in conjunction with a written account of the results (see Section 4). Interpretation of these results can be found in Section 6 and focuses on:

Analysis and interpretation of the questionnaire and interview data have been integrated in Section 4 and Section 5, to allow a more subtle evaluation of the complexity of people's use of project publications and its relationship to their opinions/attitudes.

3.4 Phase I interviews

The mail questionnaire produced data for quantitative analysis of general trends in the actual use of fieldwork publications and expectations of them. It also provided some measurement of the climate of opinion regarding the principles behind fieldwork publishing (as addressed in section 4 of the questionnaire). However, there are limits to what a mail questionnaire will reveal about values, opinions and attitudes (see May 1993, 104-7), and for this reason face-to-face interviews are often used in conjunction with questionnaires.

A small sample of such interviews (40) was undertaken to examine the values and expectations which inform attitudes. The reasoning behind this aspect of the survey was that, although attitudes towards fieldwork publications are likely to be influenced by individual patterns of behaviour and need (as examined in the questionnaire), they are also likely to be rooted in broader systems of value and assumptions about correct practice. Such systems are important, in that they are likely to influence reactions to or the implementation of changes in policy.

3.4.1 Sampling for interview

In contrast to the mail questionnaire, the interviews were not intended to provide a sample which was in some way representative of the broader survey population. Rather, the aim was to gain a deeper understanding of the particular sets of values and expectations surrounding fieldwork publication that had been identified through documentary research and the mail questionnaire. Consequently, selection for interview was determined by the specific attitudes, opinions and expectations that we wished to examine in further detail. For this purpose, four main groups were defined on the basis of the range of positions adopted with regard to the form and content of fieldwork publications, namely those who favour:

i) Full, but concisely written, description of the evidence with minimal analysis and interpretation within the fieldwork publication itself.
ii) Full, but concisely written, description of the evidence in conjunction with thorough analysis and interpretation at the level of the individual project.
iii) Selective publication (with less description of evidence) in conjunction with greater emphasis on analysis and interpretation, and more effort directed towards synthesis beyond the level of individual projects.
iv) Summary publication backed up by archiving.

Cross-cutting these groups are those who favour a range of approaches, policy in the given case being determined by the significance of the project. Such respondents nevertheless usually lean towards one of the four groups and can be so classified. The four groups obviously represent points along a continuum, and since there may be different reasons for arriving at one position, there is likely to be variation within them. Nevertheless, they offered the best available means of differentiating attitudes in order to select respondents for interview.

Hence, the primary unit of selection was not the individual, but the milieu of values and attitudes which the individual represented. The sample was drawn from questionnaire respondents who were provisionally classified into attitude groups on the basis of their response to certain questions in section 4 of the questionnaire (4.2, 4.3, 4.5). Use of the questionnaire data for the purposes of selection also enabled comparison between individual patterns of behaviour and need (as revealed by the questionnaire) and attitudes towards the principles underlying publication (as exposed in the interviews).

In practice, selection was based on a target quota scheme which aimed to interview 8-10 people in each group. The target scheme also aimed to include an even spread in terms of age, gender, constituency and region, as these factors are likely to impact on people's attitudes. An outline of the target quota and interview sample can be found in Appendix 3.4.

3.4.2 Design of interview questions

Interviews were based on a semi-structured design. Questions were standardised for the purposes of comparison, but at the same time the interviewer could probe beyond the initial response to gain clarification and elaboration. The content and structure of the interviews were based on four main questions which were primarily concerned with fulfilling objectives 4-6 in the project design:

i) What modes of practice govern the use of fieldwork publications and how do these relate to the research environment of the discipline?
ii) What principles and assumptions govern attitudes towards the publication of fieldwork projects, and expectations of them?
iii) To what extent is there a desire for change in the nature and form of fieldwork publications, and whence does desire for or resistance to change stem?
iv) How do the values and expectations which surround fieldwork publications relate to interest groups within the discipline?

Each of the questions above was broken down into subsidiary questions (i, ii, iii, etc), which were in turn translated into specific interview topics/questions (see Appendix 3.5 for an outline of the questions). A summary of the objectives and the main research questions was sent to interviewees in advance.

3.4.3 Analysis

The interviews were taped and transcribed. A summary of each interview was then produced. The comments on each question were next collated, the comments being annotated with the interviewees' attitude groups as defined above (Section 3.4.1). From this information it was possible to determine whether there was a correlation between attitude group and response.
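The collation step can be sketched as follows; the comments and group assignments shown are invented placeholders, not actual interview data:

```python
from collections import defaultdict

# Each transcribed comment is annotated with the interviewee's
# attitude group (i-iv, as defined in Section 3.4.1).
comments = [
    {"question": "Q1", "group": "i",   "text": "favours full description"},
    {"question": "Q1", "group": "iii", "text": "prefers selective publication"},
    {"question": "Q2", "group": "i",   "text": "values thorough analysis"},
]

# Collate comments question by question, tagged by attitude group,
# so any correlation between group and response can be inspected.
collated = defaultdict(list)
for c in comments:
    collated[c["question"]].append((c["group"], c["text"]))
```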

3.5 Phase II Interviews

Phase II interviews were intended to elicit comment from a representative of each of the funding bodies in advance of completion of the first draft of the report. The main objective was to ensure that any regional idiosyncrasies had been covered in the analysis of the data. The Phase II interviews were carried out face-to-face, the interviewee having previously received a summary of the main results of the survey.

In addition to the face-to-face interviews, comments were invited from six individuals with a particular interest in publication policy.

3.6 Citation analysis

Citation analysis was carried out on fourteen monographs chosen to give a spread of regions and periods (see Appendix 5.2). The analysis was carried out on the Institute for Scientific Information (ISI) database, within which the Arts and Humanities Citation Index and the Social Sciences Citation Index were used (the Science Citation Index did not hold any relevant information). Analysis was carried out on all years post-publication. The aim was to gain a general impression of the numbers of citations for different types of monographs, ranging from regionally- or period-specific texts to those with a subject-specialist interest (such as human bones or artefacts) and others of more general interest.

3.7 Library consultation

Thirty libraries of universities teaching archaeology were asked which of the fourteen monographs (see Section 3.6) they held, and how often these had been borrowed over the last five years. The same libraries were asked about their policy on the purchase of archaeological monographs.

While it was recognised that information about borrowing would not extend to consultation, and so give only an incomplete picture of use, the main aim was to obtain a broad impression of the availability of archaeological monographs in university libraries.

3.8 Consultation with editors and publishers

Telephone interviews were carried out with ten people involved with the publishing and editing of archaeological project material, and representing journal and monograph publishing. Two of those interviewed were particularly interested in Internet publication. The interviews were structured to elicit comment on the way in which archaeological projects are currently published and on the direction which this might take in the future. Interviewees were also asked to comment on more general aspects, such as problems they encountered in bringing projects to publication. The questions are summarised in Appendix 3.6.

