Competency G
Demonstrate understanding of basic principles and standards involved in organizing information such as classification and controlled vocabulary systems, cataloging systems, metadata schemas or other systems for making information accessible to a particular clientele.
Competency Definition
To serve their patrons and clients effectively, information professionals need to understand how information is organized across a variety of systems, each using different methods for indexing, storing, and retrieving records and the documents they represent. The act of organization itself makes information locatable, through the compilation and maintenance of human-readable and retrievable records (Hall-Ellis, 2015). As new classification schemes, metadata schemas, and other systems are developed and tailored to specific contexts and user groups, information professionals need to be familiar with basic and advanced cataloging principles and standards (e.g., Library of Congress, Dewey Decimal). In addition, they need to know how to work with current and emerging tools and resources for organizing information, and to collaborate with other information professionals who provide more forward-facing services, such as reference and database navigation instruction, toward the successful functioning of the whole information organization (Hall-Ellis, 2015). From the design of the systems that hold information, to the means of retrieving information from those systems, to the ways users interact with them, how things are stored in information retrieval (IR) systems determines how they are retrieved (Weedman, 2016a).
The information professional’s organization of records in IR systems focuses on the representation and structure of content rather than the content itself (Bates, 1999). The goal of creating and organizing bibliographic data is to establish and measure the relationships between documents in an IR system, ensuring that the description in the record, built from data points such as names, titles, subjects, and keywords, provides appropriate representational and identifiable access points to the document that the record represents (Weedman, 2016a; Hall-Ellis, 2015). Information about a document’s structure and organization constitutes metadata, which is used to describe the relationships among similar documents in an IR system (Bates, 1999). This metadata serves two basic purposes: the identification and the retrieval of that document from a database (Weedman, 2016a). Common types of metadata are descriptive, administrative, and structural. Descriptive metadata is data about the document itself, used to describe and identify it for indexing and retrieval from a database. Administrative metadata comprises elements used to administer and manage a document, covering creation, rights management, access control, and preservation information. Structural metadata refers to a document’s physical structure, the components that make up the document, such as a digital book’s pagination and page order. Yet metadata varies across databases and systems, depending on how documents are treated and indexed and on the rules and search fields required to retrieve them. For example, retrieving a record by an author’s alternative name depends on how the document was indexed in relation to the database’s authority record for that name, which in turn depends on its authority control. Authority control is how a library catalog or database gathers all the works of an author, the editions of a work, the titles in a series, or the subjects on a specific topic under one heading; some IR systems use authority control, while others do not. Information professionals must therefore be able to understand IR systems in terms of their structure, navigation, and retrieval, based on bibliographic data and metadata, which are largely shaped by the design of the IR system itself.
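To make these distinctions concrete, the following is a minimal sketch in Python of a single document record carrying the three metadata types described above. The field names and values are hypothetical and not drawn from any particular standard or system; they only illustrate how each type of metadata answers a different question about the document, and how descriptive elements act as access points for retrieval.

```python
# A minimal sketch of the three common metadata types described above,
# using hypothetical field names for a single digital-book record.
# None of these fields come from a specific standard; they only
# illustrate how each type answers a different question about the document.

record = {
    "descriptive": {          # what the document is and is about; used for indexing and retrieval
        "title": "An Introduction to Information Organization",
        "creator": "Doe, Jane",
        "subjects": ["Information organization", "Metadata"],
        "keywords": ["cataloging", "classification"],
    },
    "administrative": {       # how the document is managed
        "date_created": "2016-03-01",
        "rights": "All rights reserved",
        "access_level": "public",
        "preservation_format": "PDF/A",
    },
    "structural": {           # how the document is physically put together
        "component_order": ["cover", "toc", "chapter-1", "chapter-2", "index"],
        "page_count": 148,
    },
}

# A simple retrieval pass over descriptive metadata only, echoing the idea
# that descriptive elements serve as access points to the record.
def matches(rec, term):
    descriptive = rec["descriptive"]
    haystack = [descriptive["title"], descriptive["creator"],
                *descriptive["subjects"], *descriptive["keywords"]]
    return any(term.lower() in field.lower() for field in haystack)

print(matches(record, "metadata"))  # True
```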
Design refers to the entire process of conceptualization, implementation, evaluation, and change management of subsequent revisions involved in building a system, as well as the communication at each stage of the rationale and necessity behind each step (Weedman, 2016b). An IR system’s structure and design dictate how its information and records are organized. The nature of the system’s data and its organization, the available search fields and associated subfields, the indexing process and protocols, the retrieval mechanics, and the user interface (UI) all play integral roles in how an IR system functions and how various users can access it. The UI in particular shows users what search options are available, as well as how the output is presented: the search results and the layout of individual document entries. The designers of such systems do not always think like the systems’ targeted user base, which means that user input is required at all stages of building and maintaining an IR system (Weedman, 2016b). Every design context is different, which makes the issues in IR system design ill-defined, “wicked” problems with no clean solutions that apply across all situations (Weedman, 2009, 2016b). Information professionals must therefore recognize how their users think and search, so that the retrieval process aligns with users’ information-seeking behaviors and search strategies as they navigate IR systems. They must also see that design is a social process, full of uncertainties and roadblocks that are overcome through risk-taking and necessary compromises to maintain the integrity and manageability of the whole IR system (Weedman, 2009).
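The claim that storage determines retrieval can be illustrated with a toy inverted index, the core structure behind many IR systems' search functions. This is only a sketch under simplifying assumptions: the documents are invented, tokens are split on whitespace, and real systems layer stemming, authority control, ranking, and a UI on top of this core idea.

```python
# A toy inverted index, sketched to illustrate the point above that how
# documents are indexed determines how they can be retrieved. The documents
# and fields are hypothetical.
from collections import defaultdict

documents = {
    1: "Onslow County parks and recreation reservation form",
    2: "County library catalog and borrowing services",
    3: "Visitor guide to county beaches and events",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for token in text.lower().split():
        index[token].add(doc_id)          # each term points back to the documents that contain it

def search(query):
    """Return the IDs of documents containing every query term (simple AND search)."""
    terms = query.lower().split()
    results = [index.get(t, set()) for t in terms]
    return set.intersection(*results) if results else set()

print(search("county parks"))   # {1}
print(search("county"))         # {1, 2, 3}
```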
Information professionals need to develop and integrate knowledge of information organization principles into their workflows in order to maintain IR systems, add records to them, and help users navigate them, and to do so while remaining cognizant of the design and structure of those systems. This kind of systems- and design-level thinking also applies to other contexts involving non-traditional IR systems, such as website design and the navigation of human resource management (HRM) databases. I have engaged in this kind of thinking in a website analysis and proposal project for INFO 202: Information Retrieval System Design, and in previous work experience at the Office of the Registrar (OOTR) at a private arts university in the Bay Area.
Discussion of Competency Supporting Evidence
Evidence 1: Website Analysis and Redesign Proposal
In the iSchool core course INFO 202: Information Retrieval System Design, students learn about the various ways in which information can be organized and retrieved, and the structural and design principles involved in creating such organizations of information. Emphasis was given to metadata and web hierarchies, along with their associated IR principles, in relation to the contextual information needs of specific user groups. In one group project, I worked with a partner to describe the design of a website and to generate a redesign proposal that would make the site more accessible to its intended users.
For this project, my partner and I decided to discuss the layout of the government website for Onslow County, North Carolina (OCNC) [www.onslowcountync.gov]. We reasoned that when looking for information on a government website, a user expects the information to summarize what the site covers in an easily accessible way. In the case of a county website, the focus should be on what the county has to offer its residents and businesses, as well as users looking to visit an area within the county. We argued that OCNC’s website addresses these target groups adequately, yet the site’s design and layout can generate confusion due to the age of the site, redundant links within the site, and an excessive number of links that lead to external sites with no warning. By redesigning the site to address these issues, OCNC could increase the usability of its website for its intended audiences. [Important caveat: The website has changed since this project was completed in the Spring semester of 2016, when the website looked like this entry retrieved from the Internet Archive’s Wayback Machine.]
As discussed in our project, OCNC has a diverse population of over 185,000 residents, many of them military families due to the county’s five military bases; there are also tourists who come to see the county’s natural landscape, beaches, and events. At first glance, the overall presentation of the site was visually outdated (the site was copyrighted 2011, indicating that years had passed since any noticeable update), and information was presented in a plain, static format. The site as a whole had pages for all 33 of its listed departments and their associated civic functions, though not all of them actually had a presence on the site itself; some had only an external one (such as OCNC’s dedicated tourist websites, also .gov sites, which are linked from the county government site). We discussed in detail the various categories and subcategories under the pages on the home page’s menu bar, as seen in the following sitemap graphic of a sampling of OCNC pages, which showcases the redundancies and external links that occurred throughout the site.
OCNC Original Sitemap.
From our analysis, we concluded that the OCNC site’s layout had a few advantages, yet arguably more disadvantages in terms of content redundancy and a lack of transparency. Though the categories did promote straightforward navigation around the site, the content’s presentation was quite redundant. The granularity of site levels was inconsistent, which can leave users confused as to why certain pages fall under certain categories while others are weighted at the same level. For example, Parks and Recreation had its own information pages, but the actual park reservation online form appeared both under expandable tabs on the department homepage and as a link under E-Services. Another disconcerting issue in site navigation was that many pages linked to external sites associated with, but not directly connected to, OCNC’s sitemap, and it was not apparent to the user which pages would link to external sites. Some page links also triggered unexpected PDF downloads, with no indication anywhere on the site that a file would be downloaded when clicking a particular link (e.g., Park Rules under Parks and Facilities automatically downloaded a PDF of said rules; this page also had downloadable PDF park maps).
The goal of quality web design is the presentation of relevant information in an accessible, sensible manner. To this end, we discussed ways in which the OCNC site could be redesigned while retaining the strengths of its existing categories. We proposed nesting appropriate pages under Residents, Visitors, and Businesses, and keeping the original dropdown menu for Departments. For example, the Library’s Home Page, Local Military, and Parks and Recreation would be nested under Residents. The Library Home Page would have links to Library Online (the library catalog portal), an About page, and “How Do I…”, with sub-pages such as Borrow Materials and Join the Friends of the Library. We also recommended notification pages that would appear when users are redirected beyond the OCNC site; this would add transparency to site navigation and show courtesy to users navigating its pages. The absence of any indication that a link leads to an external site misleads users, creating confusion and possible frustration, especially for less web-savvy users. See the sitemap graphic below, which uses the same sampled subset of site pages as the original sitemap graphic above, followed by a sketch of the proposed nesting.
OCNC Proposed Sitemap.
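To show what the proposed nesting might look like in machine-readable form, here is a simplified sketch of the reorganized hierarchy as a nested Python dictionary, limited to the pages named in the proposal above. The structure and the "external" flag are illustrative assumptions, not part of the actual deliverable we submitted.

```python
# A simplified sketch of the proposed reorganization, expressed as a nested
# dictionary. Only the pages named in the proposal above are included; the
# real site has many more, and "external" flags mark links that would trigger
# the notification page we recommended.
proposed_sitemap = {
    "Residents": {
        "Library Home Page": {
            "Library Online": {"external": True},   # catalog portal hosted outside the main site
            "About": {},
            "How Do I...": {
                "Borrow Materials": {},
                "Join the Friends of the Library": {},
            },
        },
        "Local Military": {},
        "Parks and Recreation": {},
    },
    "Visitors": {},
    "Businesses": {},
    "Departments": {},   # retains the original dropdown menu
}

def print_sitemap(node, depth=0):
    """Walk the nested structure and print an indented outline of the site."""
    for name, child in node.items():
        if name == "external":
            continue
        marker = " (external link -> show notification page)" if child.get("external") else ""
        print("  " * depth + name + marker)
        print_sitemap(child, depth + 1)

print_sitemap(proposed_sitemap)
```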
We also discussed how focus groups and user experience testing would help ensure that the proposed changes facilitate the typical site user’s navigation habits. Focus groups of current residents would help determine whether residents actually use the OCNC site and the reasons behind their use. The findings from these focus groups would work in conjunction with subsequent user experience testing of how participants search for particular information throughout the site and how they rate their navigation experience. A card sorting task may also prove useful for uncovering how users categorize information on the existing site and what they would name those categories, in relation to the category naming already present on the site. The results could better inform the labeling of categories and links, as well as lead to a more aesthetically and conceptually consistent design throughout the site.
The experiences I gained working on this project reinforced the importance of design in websites as well as in other IR systems. Strong IR system design allows users to navigate through information freely, knowing how they moved from one point to another, from their initial search query to its resolution in finding the information they intended to find. Good site design ensures that information is where typical users would expect it in relation to the site’s other content. The recommendations my partner and I presented in our project attempted to address OCNC’s sitemap issues so that user experience and navigation of the site would be more accessible, transparent, and reflective of the needs of its users, which is the goal of the design of any IR system or website.
Evidence 2: Database Entry and Retrieval Narrative
In my position as the transcript data entry specialist for the Office of the Registrar (OOTR) at a private arts university in the Bay Area, I managed transcript data entry in the human resource management (HRM) databases PeopleSoft and SalesForce. I worked alongside co-workers to evaluate incoming high school and post-secondary transcripts for completeness and degree conferral per university admissions requirements, linking transcript data to student records in PeopleSoft and to their digital scans in ImageNow (now known as Perceptive Content). I collaborated with the internal OOTR transcript collections team to troubleshoot transcripts with missing information, conducting ongoing background research on various state high school and homeschool policies in order to recommend a follow-up with the school or the student depending on the circumstances. I also substituted for the front desk person during the lunch rotation and shared phone answering responsibilities on top of my own work, responding to students and staff who had questions regarding OOTR services. [In regard to these particular duties, I discuss my customer service focus on clientele diversity in Competency C.]
In a typical week in my position, I analyzed 500+ high school and post-secondary transcripts, reviewing whether they met admissions requirements and indicated degree conferral based on the student’s proposed academic career plan (undergraduate, graduate, certificate, personal enrichment/not-for-academic credit). I coded and linked transcripts to student records in PeopleSoft and ImageNow for subsequent processing needs. As seen in the pictures below, I filled out the various PeopleSoft and ImageNow fields and made sure that the student’s information was consistent across databases, all linked by the student’s ID number (the screenshots also include annotations of my data entry processes).
I then made sure that the same information about a student's transcript was entered correctly in their PeopleSoft record, so that the student’s information and transcript details (name of school, degree earned, printed graduation date) remained consistent across databases. (Each submitted transcript gets its own entry in a student's PeopleSoft record. For example, if a student submitted two high school transcripts and three college transcripts, there would be five total entries in that student's record.)
ImageNow pane. Red boxes indicate places to enter information.
ImageNow dropdown for transcript education type.
ImageNow dropdown for transcript institution location type.
(Common entry codes: IN = International; GRAD = Graduate; DOM = Domestic (US) High School; PROS = Prospect, no application on file.)
I then routed the digitized versions of those transcripts through ImageNow for subsequent processing needs, such as coordinating with OOTR’s transcript collections team to troubleshoot incomplete transcripts and sending foreign transcripts to translation agencies. Through a process of exploration, trial and error, and refinement early in my position, I created my “tablemat cheat sheet” Excel spreadsheet. This spreadsheet captures my personal decision-making process in light of both admissions policy and internal OOTR student record organization, describing step by step how the evaluation of a transcript dictates how its relevant information is logged and saved to a student record, a process conceptually akin to document indexing in a traditional academic database; a simplified sketch of that logic appears below. The ways in which transcripts are linked to student records influence how such records are retrieved when students, the university’s admissions representatives, and co-workers request OOTR services. [Side note: Admissions representatives in this particular context liaise directly with prospective students, who may be graduating from high school, transferring in from another college or university, or seeking master’s degree program options. In this regard, OOTR works directly with the transcripts that students submit to fulfill their admissions applications, making admissions representatives one of the main client groups with whom I interacted on a near-daily basis.]
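The following is a hedged sketch of the kind of decision logic the cheat sheet captured. The entry codes (IN, GRAD, DOM, PROS) come from the annotated screenshots above, but the function name, parameters, and branching order are illustrative assumptions rather than the office's actual procedure.

```python
# A sketch of transcript-coding decision logic: given a few facts about a
# transcript, pick the entry code used when linking it to the student record.
# The codes come from the screenshots above; the branching itself is assumed.
def transcript_entry_code(is_international, education_level, has_application_on_file):
    if not has_application_on_file:
        return "PROS"    # prospect, no application on file
    if is_international:
        return "IN"      # international institution
    if education_level == "graduate":
        return "GRAD"    # graduate-level transcript
    return "DOM"         # domestic (US) high school

# Example: a domestic high school transcript for an applicant with a file
print(transcript_entry_code(False, "high_school", True))   # DOM
```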
Since the documents in OOTR’s HRM systems (PeopleSoft, SalesForce, and ImageNow) were student records, that is, student profiles and stable, retrievable records of document relationships (a student’s transcripts, scanned OOTR forms requesting various services, and other associated documents), my position was necessary to ensure that information was logged, linked, and maintained properly for the purposes of student lookups. Part of my workflow in logging transcript information in student records involved being able to answer questions, troubleshoot transcript issues, and fulfill OOTR service requests from students, admissions representatives, and fellow OOTR co-workers. When fulfilling these requests, I retrieved student information in PeopleSoft and SalesForce by searching for exact matches by student ID. If I had no student ID (which happened quite a lot, due to name changes, multiple profiles, alumni forgetting their ID numbers, etc.), I conducted searches with partial demographic data such as last name, first name, date of birth, and the last four digits of a social security number. These HRMs provided information for specific OOTR administrative purposes, such as the duties my supervisor held, from degree conferrals for graduation to changes of academic major to academic probation. In some respects, student lookups rely on many of the same principles of access and retrieval as document-centric databases.
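A minimal sketch of that lookup pattern appears below: an exact match on student ID is attempted first, with a fallback to partial demographic matching. The record fields and matching rules are assumptions for illustration; the real HRM search screens expose these as separate search fields rather than code.

```python
# A sketch of ID-first lookup with a demographic fallback. Records and
# field names are invented for illustration only.
students = [
    {"id": "0012345", "last": "Garcia", "first": "Ana", "dob": "1995-04-02", "ssn_last4": "1234"},
    {"id": "0067890", "last": "Garcia", "first": "Anabel", "dob": "1997-11-30", "ssn_last4": "9876"},
]

def lookup(student_id=None, **partial):
    if student_id:                                   # exact-match path
        return [s for s in students if s["id"] == student_id]
    # fallback: every supplied demographic field must match
    return [s for s in students
            if all(s.get(field) == value for field, value in partial.items())]

print(lookup(student_id="0012345"))                  # one exact hit
print(lookup(last="Garcia", dob="1997-11-30"))       # narrows two same-surname records to one
```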
In addition to the information that I logged about student transcripts, admissions representatives logged information in SalesForce that was not present in PeopleSoft, adding another layer of information to a student’s whole record (and associated metadata usable in SalesForce report generation, which summarizes subsets of student data for internal organizational purposes outside my roles and workflows). Even though the university’s IT department developed SalesForce as an external shell to display PeopleSoft content, I still needed to navigate both systems to retrieve relevant information about students in relation to their transcripts and potential system holds such as academic probation, financial holds, and FERPA (Family Educational Rights and Privacy Act) blocks. (FERPA authorizes only students to request and view their own academic information; third parties such as family members and employers must provide specific paperwork to request authorization from the student and the school to see a student’s academic information.) I had to sift through this information at times, such as when submitted transcripts were incomplete (missing critical information like high school graduation dates, bachelor’s degree conferral dates, or academic coursework history). SalesForce also included student interactions with admissions representatives (e.g., calls, emails, program inquiries). In one sense, a student’s whole record was fractured across two different records in two different IR systems; in another sense, this organization protected aspects of a student’s whole record, allowing only specific employees such as myself to access a level of information not easily available to other employees, namely the student’s digitized transcripts. And all of this information is accessible by the same key, the student’s ID number.
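The "same key, two systems" idea can be sketched as two partial views of a student joined by the shared ID. The field names below are invented for illustration; only the linking principle reflects my actual workflow.

```python
# Two partial views of a student's whole record, held in separate systems
# and joined by the student ID. All field names and values are hypothetical.
peoplesoft = {
    "0012345": {"transcripts": ["HS diploma 2013", "BA conferred 2017"],
                "holds": ["FERPA block"]},
}
salesforce = {
    "0012345": {"interactions": ["phone call 2018-02-01", "program inquiry: MFA"],
                "admissions_rep": "J. Smith"},
}

def whole_record(student_id):
    """Combine the two partial views of a student, keyed by the shared ID."""
    combined = {"id": student_id}
    combined.update(peoplesoft.get(student_id, {}))
    combined.update(salesforce.get(student_id, {}))
    return combined

print(whole_record("0012345"))
```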
PeopleSoft data entry steps to input transcript information based on educational institution, education level, and date received.
PeopleSoft data entry steps to input transcript information if there is a printed graduation date and a degree earned (high school diploma, Bachelor's degree, Associate's degree, Master's degree, etc.).
In my time at OOTR, I learned first-hand the importance of organizing information for a specific clientele, students and employees, and for a specific purpose: logging, maintaining, and retrieving students’ academic information for OOTR’s various functions and transcript services. I was not far along in the MLIS program while working there, but taking the iSchool’s core courses such as INFO 202 while also working out how OOTR structured student information in PeopleSoft, SalesForce, and ImageNow helped bring the principles of information organization to life and provided some defining experiences in personal and professional practice. I saw the need to organize and streamline my position’s workflows (something that I do naturally in my personal life) and to develop the language to discuss database-related queries and issues. I also learned how to articulate the value that an information professional brings to an organization as a result of my then-newfound LIS perspective. This workplace provided an environment to test out what I was learning in courses like INFO 202, helping me develop my understanding of, and ultimately improve, the processes in my OOTR position. To this end, I generated the office’s first comprehensive “standard operating procedures” manual for this position to promote transparency of workflows and procedures for supervisors and co-workers. The manual contained outlined procedures, notes, descriptions of special transcript scenarios (from their identification to how I resolved them, as a template for future action in similar situations), and annotated screenshots like the ones presented above. From all of these experiences, I gained conceptual database know-how from inputting student records, along with LIS information organization concepts that guided my efforts in asking IT the right kinds of questions to help solve various “wicked” database design and access problems. I am also grateful that my now-former immediate supervisor acknowledged my growth and all of these efforts in an unsolicited LinkedIn recommendation. My experiences here also motivated the content of a website that I built for INFO 240 and the blog reflections for a final project in INFO 202. [See Competency H for the website and Competency B for the WordPress reflection blog.]
Future Directions
Information professionals in charge of maintaining and organizing documents and metadata in IR systems apply the principles and standards of information organization effectively. They also know the design and structure of their IR systems and leverage that knowledge in meeting their users’ information needs. Such principles also apply to other systems that provide information in an organized fashion to a distinct user group, such as government websites. INFO 202’s site design project discussed here and the WebData Pro database group project provided a conceptual framework for me to comprehend the nuances of OOTR’s systems, find ways to navigate them effectively, and make my workflows smoother over time. [For discussion of the database project, see Competency E.]
In regard to potential future workplaces, current trends for information positions outside traditional LIS contexts that involve information organization and retrieval call for critical thinking, a degree, a coding background (e.g., SQL, Python), comfort with big data sets, the ability to find trends using statistical analyses, and years of experience with specialized, industry-standard systems (dependent on the field). Though the concept of information and its organization is handled differently across disciplines and sectors, the LIS principles of information organization, retrieval, and management are similar to what I discussed in INFO 202. Moving forward, I see myself translating and further developing my conceptual understanding of IR system design and structure in future positions that involve handling information in databases, seeing the bigger picture of how records across contexts are retrieved, maintained, and updated. I know that I can bring to any future workplace my perspective on organizing and streamlining workflows, along with fine attention to detail, critical thinking, experience with data entry of sensitive information, and a desire to learn about new systems and technologies, all grounded in LIS information organization principles.
The principles of information organization ultimately lead up to the individual search query and a user’s related interactions within an IR system. As Bates (1999) discussed before the turn of the millennium, the nature of the organization of information brings up three “big questions”:
The physical question: What are the features and laws of the recorded-information universe?
The social question: How do people relate to, seek, and use information?
The design question: How can access to recorded information be made most rapid and effective?
The answers to such questions will vary across contexts and their associated IR systems, with implications for the nature of search itself, especially searches through cataloged information in IR systems. The perspectives that emerge are grounded in the various, sometimes defining, experiences of the individual information professional, from the design of an IR system to the implementation of that system for a particular user group. Experience will always be necessary when searching for information, whether through traditional databases, non-traditional databases such as HRMs, or even the wider Web. Information professionals therefore need to be mindful of how IR systems are structured and of the principles and standards involved in organizing those systems’ information, as this plays a critical role in how they are able to meet their clients’ information needs.
References
Bates, M. (1999). The invisible substrate of information science. Journal of the American Society for Information Science, 50(12), 1043-1050. Retrieved from https://pages.gseis.ucla.edu/faculty/bates/substrate.html
Hall-Ellis, S. D. (2015). Organization information. In S. Hirsh (Ed.), Information services today: An introduction. [Kindle version] (pp. 139-148). Lanham, Maryland: Rowman & Littlefield Publishers.
Weedman, J. (2009). Design science in the information sciences. In M. J. Bates & M. N. Maack (Eds.), Encyclopedia of Library and Information Sciences, Third Edition (pp. 1493-1506). New York: Taylor and Francis. doi:10.1081/E-ELIS3-120043534
Weedman, J. (2016a). Lecture 1: Introduction to the course and overview of course concepts. In V. Tucker (Ed.), Information retrieval system design: Principles and practice, 2nd ed. (pp. 23-40). Ann Arbor, Michigan: AcademicPub.
Weedman, J. (2016b). Lecture 4: The design process. In V. Tucker (Ed.), Information retrieval system design: Principles and practice, 2nd ed. (pp. 219-232). Ann Arbor, Michigan: AcademicPub.