The Process of Social Research | Kenneth D. Bailey & H. Russell Bernard

Table of Contents

1. Introduction: What is Social Research?

Core reading:

The Process of Social Research - What is Social Research?

Social research is, at its heart, the process of asking questions about human society and searching for answers in a systematic, organized way.  

It’s what we do when we want to move beyond our own assumptions and gut feelings to truly understand how people behave, why they form groups, or how our institutions function.  

Unlike casual conversation or a hunch, social research follows a disciplined process of inquiry to describe, explain, or even predict social phenomena. 

Bailey emphasizes that this isn’t just about following a set of boring rules; it’s about making a plan. Without a thoughtful plan and careful preparation, your research can quickly veer off course.  

You might end up with a mountain of information that doesn’t actually answer your original question. This is where what Bernard calls methodological rigor comes in. Rigor doesn’t mean being stiff or overly complex; it means being thorough, honest, and transparent about how you collected your data and why you trust it. 

Finally, the best social research strikes a beautiful balance between creativity and discipline. The “discipline” is the structure—the surveys, the ethics, the coding, the statistical tests that ensure your work is sound.  

But “creativity” is the secret sauce. It’s the creative leap it takes to formulate an interesting question, the cleverness to design a way to observe people without disturbing them, and the imagination needed to weave individual stories into a broader social theory. You have to be an artist and an accountant at the same time.

2. Steps in the Research Process (Bailey) 

Think of the research process like planning a road trip. You wouldn’t just start the car and drive aimlessly. You need a destination, a map, a way to track your progress, and finally, a story to tell when you get home. Bailey breaks this journey down into six manageable steps. 

Problem identification: choosing a researchable question. 

This is the “where are we going?” step. You start with a general curiosity—say, “Why are people leaving rural towns?”—but then you must sharpen it into a specific, researchable question.  

The bad question is “What is wrong with society?” A good, researchable question is “Does the lack of high-speed internet access predict out-migration among young adults in rural Kansas?” Not every interesting question can be answered with data, so your job here is to find one that can. 

Conceptualization: clarifying concepts and variables. 

Now you get clear on what you actually mean. Words like “poverty,” “happiness,” or “community involvement” sound nice, but they’re fuzzy. Conceptualization is the process of taking those fuzzy ideas and turning them into something you can measure.  

For example, if you’re studying “academic success,” do you mean GPA? Graduation rates? Test scores? You also identify your variables—the things that change (like study habits) and the outcomes you care about (like final grades). This step prevents you from comparing apples to oranges later. 

Designing the study: qualitative, quantitative, or mixed methods. 

Here’s where you choose your game plan. Will you count things (quantitative), like how many hours people study? Or will you explore meanings and experiences (qualitative), like interviewing students about what studying feels like? Or—and this is often best—will you do a bit of both (mixed methods)? Your question drives this choice. “How many?” leads to numbers. “Why?” or “How?” leads to stories. Bailey reminds us that no design is inherently superior; each just serves a different purpose. 

Data collection: surveys, interviews, observations, archival sources. 

This is the hands-on, get-your-hands-dirty phase. You go out and actually gather your evidence. You might hand out a survey to 500 people, sit down for deep one-on-one interviews, quietly observe behavior in a coffee shop, or dig through old newspapers and government records (archival sources).

Each method has trade-offs. Surveys are great for breadth; interviews are great for depth. Observation catches what people actually do (not just what they say they do). The key is matching your method to your question. 

Analysis and interpretation: linking findings to theory. 

You’ve got a pile of data now—maybe spreadsheets full of numbers or pages of interview transcripts. Analysis is the messy, thoughtful work of making sense of it all. You look for patterns, differences, surprises, and contradictions.  

But here’s the crucial part: interpretation means asking, “So what?” You take those patterns and link them back to the bigger ideas (theories) you started with. Do your findings support what previous researchers thought? Do they challenge it? This is where raw information turns into actual knowledge. 

Reporting results: writing, presenting, and disseminating knowledge. 

The final step is often the most neglected, but it’s also the most generous. You owe it to the people you studied and to other researchers to share what you learned.  

Reporting isn’t just about writing a dry academic paper (though that’s one way). It can mean giving a talk, creating a fact sheet for a community group, publishing an article, or even recording a podcast. The goal is to translate your hard-earned insights into a form that others can actually use. After all, research that sits in a drawer doesn’t change anything. 

3. Preparing for Research

If Bailey’s steps are the road trip plan, Bernard’s preparation chapter is the stuff you do before you even turn the key in the ignition.  

It’s the backpacking checklist, the phone calls ahead, and the honest conversation with yourself about what you’re about to walk into. Bernard argues that skipping these steps doesn’t just make research harder—it can make it impossible or even harmful. 

Defining objectives: what do you want to know? 

This sounds obvious, but it’s surprisingly easy to mess up. Bernard says you need to move from a vague interest (“I want to study street vendors”) to a crisp, concrete objective (“I want to understand how street vendors in Bangkok negotiate space with police and shop owners”).  

A good objective is specific enough that you could explain it to a stranger at a bus stop. It also forces you to ask: Are you trying to describe something? Explain the cause? Count how often something happens? Your objective becomes your compass when you inevitably get lost in the details later.

Selecting a site or population: practical and ethical considerations. 

Where or among whom will you do your research? This decision is never purely academic. On the practical side, you need to be honest: Can you actually get there? Do you speak the language? Will people have time to talk to you? But there are also deep ethical considerations.

Bernard warns you against choosing a population just because it’s “exotic” or convenient for you. You also have to ask: Will my presence here cause harm? Am I studying a vulnerable group (children, refugees, prisoners) that requires extra protection? Sometimes the “most interesting” site is also the most ethically complicated—and that doesn’t automatically mean you should avoid it, but it does mean you need to think much harder. 

Gaining access: permissions, rapport, and trust-building. 

This is the art of getting people to let you in. “Access” can mean a formal letter from a government agency or an ethics board. But more often, it means something softer: rapport. Rapport is that fragile, human sense of “I trust you enough to be honest with you.” Bernard emphasizes that trust isn’t a checkbox—it’s built slowly, awkwardly, and often through small gestures. You show up on time.  

You keep your promises. You listen more than you talk. You explain clearly why you’re there and what you’ll do with people’s stories. In many communities, you don’t just need permission from a leader; you need the quiet consent of everyday people who have every right to ignore you. 

Logistics: funding, time management, equipment, and safety. 

This is the unglamorous, essential stuff that sinks more research projects than bad ideas ever do.  

Funding: Do you have enough money to pay for travel, translation, transcription, or participant incentives?  

Time management: Bernard warns that almost everyone underestimates how long things take—gaining access alone can take months.  

Equipment: Will your recorder fail in humid weather? Do you have backup batteries and paper surveys when the Wi-Fi dies?

And finally, safety: your safety and your participants’ safety. Are you going somewhere politically unstable? Is your research topic (like drug use or political dissent) dangerous for people to discuss? Logistics isn’t boring paperwork; it’s the difference between finishing your study and abandoning it halfway through.

Ethical preparation: informed consent, confidentiality, researcher responsibility. 

Bernard treats ethics not as a hurdle but as a core part of preparation.  

Informed consent means people agree to participate with their eyes open: you tell them what the research is about, what risks exist, that they can quit anytime, and what you’ll do with the results. No deception, no fine print.  

Confidentiality means protecting identities—using pseudonyms, encrypting files, and never linking someone’s name to their quote if they didn’t explicitly agree.  

But Bernard adds something deeper: researcher responsibility. This means you don’t just parachute in, collect data, and leave. You have a responsibility not to exploit people’s openness. You don’t promise benefits you can’t deliver. And if you witness harm or illegality, you need to have thought through your moral obligations before it happens. Ethical preparation is asking hard questions when you still have time to change your plan. 

4. The Literature Search

Let’s be honest: many new researchers dread the literature review. They think it’s just busywork—reading a bunch of dusty old studies to prove you did your homework. But Bernard completely flips that script. He argues that a good literature search isn’t a chore you suffer through before the real research begins. It’s actually a conversation. You’re walking into a room full of smart people who’ve been thinking about your topic for years, and your job is to listen carefully before you open your mouth.

Purpose of literature review: situating research into existing knowledge. 

Why bother reading what others have already done? Two big reasons.  

First, humility: someone else has probably already figured out part of what you’re trying to learn. Why reinvent the wheel?  

Second, originality: you need to know what’s already known, so you can figure out what’s not known—that’s your opening. Bernard says a good literature review situates your work in the existing landscape. It answers questions like: Has anyone asked this before? What did they find? Where did they disagree? And most importantly: What’s the next logical question that nobody has answered yet? Without this step, you might spend months “discovering” something that was published ten years ago.

Sources: books, journal articles, dissertations, reports, online databases. 

Not all sources are created equal, and Bernard encourages you to be strategic.  

Journal articles are usually the gold standard—they’re peer-reviewed, current, and focused.  

Books give you deeper, more developed arguments, but they can be a few years out of date by the time they’re published.  

Dissertations are hit-or-miss (some are brilliant; others are… exercises), but they often explore very niche topics you won’t find elsewhere.  

Reports from governments or NGOs can be incredibly useful for applied questions.  

And online databases (like Google Scholar, JSTOR, or PubMed) are your fishing nets. The key is knowing which net to throw where. Bernard’s advice: start broad, then get specific. Don’t just type a question into Google and call it a day.

Strategies: keyword searches, citation chaining, using bibliographies. 

This is where the detective work begins.  

Keyword searches are obvious but tricky. If you’re studying “fear of crime,” try “perceived risk,” “safety perception,” or “anxiety about victimization”—different authors use different words for the same idea.  

Citation chaining is Bernard’s secret weapon. Here’s how it works: find one really good, recent article on your topic. Then look at who it cites (going backward) and who has since cited it (going forward). It’s like following a trail of breadcrumbs.

And never ignore bibliographies—the reference list at the end of a good book or article is a treasure map. You can find five more useful sources just by scanning the works someone else already did the hard work of gathering.
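The backward/forward logic of citation chaining can even be sketched as a tiny program. This is a minimal illustration only: the paper names and citation links below are invented, and in practice you would follow references by hand or through a database rather than a hard-coded dictionary.

```python
# Hypothetical citation graph: each paper maps to the papers it cites.
# All names here are made up for illustration.
CITES = {
    "Smith2020": ["Jones2010", "Lee2015"],
    "Lee2015": ["Jones2010"],
    "Park2023": ["Smith2020"],
    "Diaz2024": ["Smith2020", "Lee2015"],
}

def backward_chain(paper):
    """Going backward: the sources your seed article cites."""
    return CITES.get(paper, [])

def forward_chain(paper):
    """Going forward: later work that has since cited your seed article."""
    return [p for p, refs in CITES.items() if paper in refs]

def chain(seed):
    """One hop in each direction from one good, recent article."""
    return {
        "backward": backward_chain(seed),
        "forward": forward_chain(seed),
    }
```

Starting from the (invented) seed "Smith2020", `chain("Smith2020")` collects both the older foundations it cites and the newer work building on it, which is exactly the breadcrumb trail described above.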

Evaluating sources: credibility, relevance, and theoretical contribution. 

Here’s where you put your critical thinking hat on. Just because something is published doesn’t mean it’s good. Bernard says to ask tough questions.  

Credibility: Who wrote this? Are they affiliated with a real institution? Did it go through peer review? Is the journal respected or sketchy?  

Relevance: Does this actually speak to your question, or is it tangentially related? A study of Japanese factory workers might not help you understand Brazilian street vendors.  

Theoretical contribution: Does this source just report facts, or does it help you think about your topic in a new way? The best sources don’t just give you answers—they sharpen your questions. And be especially careful with internet sources. Anybody can put up a website. Bernard’s rule of thumb: if you can’t figure out who’s responsible for the information and why they’re sharing it, walk away. 

Outcome: refining your research questions and hypotheses. 

This is the payoff. After you’ve read deeply, you don’t just have a list of citations—you have a sharper, smarter set of questions. Maybe you started out wanting to ask, “Why do people join cults?” But after reading the literature, you realize that the question is too big and vague.

Now you can ask something more precise: “Do people who report a recent major life loss join high-control groups at higher rates than those who don’t?” Or maybe the literature reveals a contradiction—two studies found opposite things—and your research can help untangle that puzzle. 

Bernard insists that a good literature search doesn’t just inform your questions; it transforms them. You go from a beginner’s naive curiosity to a researcher’s focused, informed, and genuinely original inquiry. That’s not busywork. That’s the beginning of a real discovery.

5. Linking Theory and Method 

This is where a lot of research either clicks beautifully or falls apart. Think of theory as your pair of glasses—it shapes how you see the world and what you pay attention to.  

The method is your toolkit—the actual tools you use to gather evidence. The trick is making sure your glasses and your toolkit actually work together.

How literature review connects to theoretical frameworks. 

Remember that literature search we just talked about? It doesn’t just give you facts to cite. It introduces you to existing theoretical frameworks—basically, different lenses that previous researchers have used to make sense of similar problems.  

For example, if you’re studying why some kids drop out of school, one theory might emphasize economic factors (poverty, needing to work). Another theory might focus on social belonging (feeling alienated by teachers or peers). Your literature review helps you figure out which lens (or combination of lenses) makes the most sense for your question.  

Bernard argues that you don’t have to invent a whole new theory from scratch—you stand on the shoulders of people who came before you. But you do need to be explicit about which framework you’re using, because that framework will guide everything else. 

Choosing methods that align with research goals. 

Here’s where theory and method shake hands. Once you’ve chosen your theoretical lens, you need methods that actually fit that lens.  

If your theory says poverty is a matter of measurable income thresholds and unemployment rates, then you probably want quantitative methods—surveys, census data, statistics. You’re counting things.  

But if your theory says poverty is about lived experience, shame, and social exclusion, then you probably want qualitative methods—interviews, participant observation, life histories. You’re trying to understand the meaning. Neither is better. They’re just answering different questions.  

Bailey warns that researchers often fall in love with a method first (say, surveys) and then awkwardly try to cram every question into that method. That’s backwards. Your research question and your theoretical framework should choose your method—not the other way around. 

Avoiding “method-driven” research without clear theoretical grounding. 

This is a common trap, especially for beginners. You learn how to do one thing—maybe you get really good at running statistical tests or conducting interviews—and suddenly every research question starts to look like a nail for your shiny hammer. Bernard calls this method-driven instead of question-driven.  

The danger is that you end up collecting lots of tidy data that doesn’t actually illuminate anything important. You can have a perfectly executed survey that answers a boring or meaningless question. Or you can do beautiful interviews that are rich with quotes but never connect back to any bigger idea.  

Theory is what saves you from that. It forces you to ask, “So what? Why does this matter beyond my own curiosity?” Without theoretical grounding, you’re just reporting observations. With it, you’re contributing to a larger conversation about how the social world works. 

6. Practical Challenges in Research 

Let’s be real for a moment. Every textbook makes research sound clean and linear—like you follow the steps, and everything works. That’s not how it actually goes. Real research is messy, unpredictable, and often frustrating. Bernard and Bailey both acknowledge this honestly. Here are the practical challenges you will almost certainly face.

Time and resource constraints. 

You will never have as much time or money as you want. Never. The perfect study—with a huge sample, multiple years of follow-up, and unlimited travel—exists only in grant proposals. In real life, you have deadlines, a budget that’s too small, and probably other responsibilities (a job, classes, family). The key isn’t to pretend that constraints don’t exist. It’s to design a study that is feasible given what you actually have.  

Bernard advises being ruthlessly honest during the planning phase. Can you really interview 100 people when transcription alone takes three hours per interview? Can you really observe a community for six months when you only have two weeks of vacation? Sometimes the most responsible choice is to scale back your ambitions and do a smaller, simpler study extremely well rather than a sprawling, messy one poorly. 

Cultural sensitivity and positionality of the researcher. 

This is about recognizing that you don’t walk into the field as a blank, neutral observer. You bring your own background—your race, class, gender, education, nationality, and life experiences. That’s your positionality. A wealthy outsider studying poverty will be seen differently than someone who grew up in that community.  

A male researcher asking women about their health will face different dynamics than a female researcher would. Bernard insists that you can’t eliminate these differences, but you can and must acknowledge them honestly.  

Cultural sensitivity means learning enough about the community’s norms, history, and power structures that you don’t accidentally cause offense or harm. It means asking, “Who benefits from this research?” and “Am I representing people fairly, or am I just using their stories for my own career?” This isn’t a box to check. It’s an ongoing, uncomfortable, essential reflection. 

Dealing with unexpected fieldwork obstacles. 

Something will go wrong. It always does. Your recording device will break down. The person you were supposed to interview tomorrow will cancel and then leave town. A political protest will shut down access to your field site. A key document will turn out to be lost. Bailey says the mark of a good researcher isn’t that nothing goes wrong—it’s how you respond when it does.  

The best preparation is having a Plan B, Plan C, and Plan D. Bring backup equipment. Build extra time into your schedule. Have alternative participants in mind. And perhaps most importantly, develop a sense of humor and patience. Some of the best data comes from unplanned moments—the casual conversation after an interview ends, the unexpected observation when your original plan falls through. Stay flexible.

Maintaining rigor while adapting to real-world conditions. 

This is the balancing act. On one hand, you want rigor—honesty, systematic procedures, and transparency about your methods. You don’t want to just make stuff up or cherry-pick evidence that supports your pet ideas.

On the other hand, real-world conditions are never perfect. You can’t always get a random sample. People don’t always answer questions the way you hope. You might have to change your interview guide halfway through because you’re learning new things.  

Bernard’s wisdom is this: rigor doesn’t mean rigidity. It means being honest about the compromises you made and explaining how those compromises might affect your conclusions. If you had to switch from random sampling to snowball sampling, say so—and explain why.  

If your interview recording failed and you had to rely on notes, acknowledge that limitation. Rigor is transparency, not perfection. A study that honestly admits its flaws is far more trustworthy than one that pretends those flaws don’t exist. 
