How can we better determine an editor’s skill before hiring?

Is deeper and deeper testing really the answer?

We’ve got a problem in tech.

We require a lot of our editors, more so than is usual in other disciplines.

The editors who work in tech need to be clear and logical thinkers, but also nimble and creative. They often work in several content types, for various audiences. They may be thinking a brand-new project through from the beginning or evaluating one at an early milestone. They often participate in developing the very guidelines that they and the writers will be following. They may also develop templates, models, annotated samples, or other resources. When it comes to the editing itself, they may be called upon to restructure or rewrite, to tweak syntax or diction or tone, or simply to sort out the caps and sweep the commas into place. In short, tech editors are many types of editor in one. Nor are they told which role to play when; most typically they must themselves analyze the writing, judge what is needed, and determine how best to accomplish those tasks (insofar as is possible) in the given timeframe. Then, whatever the level of edit, they’ll also be proofing their own work, as there will typically be no one else to do so.

That’s some spectrum. How does one test for all of this beforehand?

While there are scores of publishing-type editing tests to choose from, these focus on mechanical style, testing the rote minutiae of a particular style guide, typically The Chicago Manual of Style. Such tests work for publishing houses, where copyediting is generally applied in a completely independent cycle, at a defined stage in the overall editing schedule, by editors who only copyedit. But these tests do not work well for our environments, where “copyediting” is generally not distinguished from the other editing tasks, not handled in an entirely separate stage, not done by editors who exclusively copyedit. Nor would such tests necessarily be useful even for assessing the copyediting skills of a technical editor: in our work, much of Chicago (or any other standard style guide) doesn’t apply, and an entire complex of other, highly specific terminology and concerns does. That’s why we end up with such detailed in-house writing and style guides.

The standard editing tests are thus wholly inadequate for our purposes. Such tests don’t assess the copyediting skills we need: someone with very sharp eyes and detailed knowledge of the sorts of issues that crop up in our environments might still not do very well on a test designed for publishing-industry concerns. And more importantly, such tests don’t screen for the deeper, more fundamental skills that tech editors apply daily. The ability to see in a particular context, sometimes a relatively new context, ways in which what’s written is not internally consistent, logical, or comprehensive. The ability to see where a particular document or set of topics or UI text is not organized or developed well for a particular audience or purpose. The ability to articulate issues and potential solutions clearly to the writer and other stakeholders, sometimes also to rally them to the cause. And a range of other skills besides. Tech editors move fluidly among skills traditionally associated with developmental editing, line editing, and copyediting, with a little acquisitions editing thrown in. As testimony to that work, our in-house style guides travel well beyond issues of mechanics to canvass central characteristics of structure and content, sometimes for different environments, distinct purposes, varied audiences.

Ah, you say, but we’re hiring a contractor and we’ll be going through an agency. This turns out not to be much of a boon. All those agencies typically do is collect resumes. They look for resumes that list the very same skills you’ve included in the job description, but they’re not confirming those skills. That’s up to you. In this sense, an agency recruiter functions the way an HR recruiter does: simply applying the first filter, making sure that you’re looking at applicants who look good on paper.

Unfortunately, that’s an uncertain measure.

To vet editor candidates, it seems that tech departments must look instead to developing their own tests. But developing tests, and evaluating them, is itself a specialized skill. Not to mention time-consuming. What can we do to ensure that our tests are successfully filtering for the skills we need? And how can we streamline this time-intensive process? In recent years, these questions have been the focus of conversation in many of the groups I’ve worked with. But could it be that in this hunt for ever better, ever more revealing, ever more predictive tests, we’re becoming ever more entangled in clearing the wrong path?

When I was teaching editing and writing classes, I got to know a potential editor’s capabilities and approach in the natural course of things. Assignments, in-class discussion, quizzes, tests — together, these served to paint a complete portrait. And similarly, whenever first working on the job with a new hire, I typically get to know that person’s capabilities and work style within a few short weeks.

Can we design tests, or indeed a hiring process, that adequately convey this same sort of information? That tell us how an editor will work with other editors, with writers, with subject matter experts? That tell us how sharp those eyes are, across a range of issues, document after document? That tell us how flexible her sensibilities are, how deep her knowledge, how able her explanations? That tell us the rhythm she’ll settle into, how she’ll juggle competing priorities? Or should we acknowledge that nothing we can discover in a test, a couple of tests, or even a series of interviews, no matter how clever the questions, can give us the depth and richness of information that actually working with someone does?

Should we think, that is, about revamping our process altogether?

Should we acknowledge that nothing beats a full-scale assignment — not the smaller-scoped test assignments we devise, not the samples we ask to see — for telling us whether an individual is a good match for a role or not? And when you come right down to it, that nothing beats working alongside that person for two to three weeks, day in and day out, on a series of assignments.

This is my question. Should we follow the lead of the publishing industry and pay candidates to work for a short probationary period, while we assess the fit with the only test that will really tell us what we need to know: working with someone? That way, we could understand, in a deep and nuanced way, how a person actually works and what, in her writing and editing, she is capable of, before actually hiring her.

I’d like to start a conversation. Who’s in?

________________________

Published originally in Corrigo, the official publication of the STC Tech Editing SIG.