The language of engineering
Repos. CI/CD. Tech debt. P99 latency. If you work alongside engineers but came from marketing or DevRel, their vocabulary can feel like a foreign language. Here is a practical glossary of engineering terms, and why understanding them makes you better at your job.

You're in a sprint planning meeting. The engineering lead says, "We need to refactor the auth service and add rate limiting before we ship the new API. There's tech debt in the middleware, latency is creeping up at p99, and the deploy pipeline has been flaky since we moved to containers."
Everyone nods. You nod too. You understood maybe 30% of that.
If you came from marketing, DevRel, or product, you know this feeling. The language of engineering is a foreign language. And nobody hands you a dictionary when you walk into your first architecture review.
I wrote The language of sales to give marketers a map of sales vocabulary. I wrote The language of marketing to give non-marketing people insight into how we work. This post is the final piece of the puzzle. It covers the language of building.
These days, we're all builders. Use this opportunity to ask questions and learn how engineering works. You can stop being "the marketing person" and become a trusted partner. Whether your title is product marketing or DevRel, this vocabulary is the bridge.
What story does engineering tell?
Every engineering team answers four questions:
- What are we building?
- How do we build it?
- Is it working?
- Can it handle more?
That's it. The entire vocabulary of engineering exists to communicate progress on those four questions. Build. Ship. Monitor. Scale.
If you're a developer marketer or DevRel professional, you interact with engineering constantly. You write about their work. You market the things they build. You sit in their meetings and read their documents. The better you understand their language, the better you'll be at your job.
How code gets built
Software development is a workflow with specific steps and specific vocabulary at each step. Code moves from a developer's laptop to a live product through a series of checkpoints. Here is how that workflow works, from first commit to production deploy.
A repository (everyone calls it a "repo") is where code lives. Every change ever made is recorded. You can go back to any point in history and see exactly what the code looked like. GitHub, which hosts over 200 million repositories, is where most repos live today.
When an engineer wants to make a change, they create a branch. A branch is a copy of the code where you can make changes without affecting the main version. You work on the branch, and the original stays clean until you're ready.
When the changes are done, the engineer opens a pull request (PR). A PR is a proposal. It says, "Here are my changes. Please review them." Other engineers read the code, ask questions, suggest improvements, and eventually approve it. This process is called code review. It's quality control, and it's taken seriously. A careless code review can let bugs into production.
Once approved, the code gets merged into the main branch.
From there, automated systems take over. CI/CD stands for Continuous Integration and Continuous Deployment. CI runs tests automatically every time someone submits a PR. If the tests fail, the code doesn't get merged. CD takes tested code and pushes it to production automatically. Together, they're the assembly line of software development.
The build is the process of converting source code into something that runs. If the build fails, nothing ships.
Deploy means putting code into production so real users can use it. But you don't go straight to production. You deploy to staging first. Staging is a test environment that mirrors production. You catch problems here so customers never see them. Production is the real thing. What customers actually use. When someone says "it's in prod," they mean it's live.
Most engineering teams work in sprints. A sprint is a fixed time period, usually two weeks, where the team works on a specific set of tasks. The backlog is the list of everything that needs to be built, ordered by priority. Product managers usually own the backlog. Engineers pull work from it.
Every morning, the team has a standup. Short meeting. Everyone shares what they're working on and what's blocking them. It's called a standup because you're supposed to stand up so nobody gets comfortable and talks too long.
Velocity measures how much work a team completes per sprint. Teams use velocity for planning. Here's something important: engineers hate being measured by velocity as a performance metric. It's a planning tool, not a scorecard. If someone in leadership starts treating velocity like a quota, expect pushback.
What is the product architecture?
Architecture is how the product is structured. It determines what the system can do and how fast it can change.
An API (Application Programming Interface) is a structured way for software to talk to other software. When you hear "we're building an API," they're building the interface that developers interact with. If you're marketing an API product, understanding APIs is non-negotiable.
An SDK (Software Development Kit) is a package of tools, libraries, and documentation that makes it easier to build on an API. The API is the door. The SDK is the welcome mat, the key, and the instruction manual.
A CLI (Command Line Interface) is a text-based tool that developers use from the terminal. No buttons, no menus. Just text commands. Many developers prefer CLIs over graphical interfaces because they're faster and can be automated.
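To make that concrete, here's a minimal sketch of a CLI in Python using the standard library's argparse module. The `greet` command and its `--shout` flag are invented for illustration:

```python
import argparse

def main(argv=None):
    # A hypothetical "greet" CLI: all interaction is text in, text out.
    parser = argparse.ArgumentParser(prog="greet", description="Print a greeting.")
    parser.add_argument("name", help="who to greet")
    parser.add_argument("--shout", action="store_true", help="uppercase the output")
    args = parser.parse_args(argv)

    message = f"Hello, {args.name}!"
    if args.shout:
        message = message.upper()
    print(message)

if __name__ == "__main__":
    main()
```

Running `python greet.py world --shout` prints `HELLO, WORLD!`. Everything is text in and text out, which is exactly why CLIs are so easy to script and automate.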
REST and GraphQL are two styles of building APIs. REST is the established standard. You have separate URLs for different data. GraphQL is newer. One URL, and you ask for exactly the data you want. Engineers have passionate opinions about which is better. You don't need to pick a side. You need to know the difference exists.
Microservices and monolith describe how you organize code. A monolith is one big application. Everything lives together. Microservices break it into small, independent services that talk to each other. Startups usually start as monoliths because it's simpler. They break apart later when they need to scale. Right now, the industry is swinging back toward monoliths because microservices add operational complexity that many teams underestimate.
Frontend is what users see. The website. The app. The buttons and forms. Backend is what runs on servers. The database, the API, the business logic. Fullstack means an engineer works on both.
A database is where data lives. SQL databases like Postgres and MySQL store structured data in tables with rows and columns. NoSQL databases like MongoDB and Redis store data differently. Postgres is everywhere right now. If your company uses Supabase, you're using Postgres.
Schema is the structure of your database. What tables exist, what columns they have, how they relate to each other. Changing the schema is called a migration. Migrations are nerve-wracking because a bad one can break things in production.
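Here's a toy sketch of a schema and a migration using Python's built-in sqlite3 module, with an in-memory database so nothing real is at risk. The `users` table is invented for illustration:

```python
import sqlite3

# A throwaway in-memory database to show schema and migration.
conn = sqlite3.connect(":memory:")

# The initial schema: one table, two columns.
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.execute("INSERT INTO users (email) VALUES ('dev@example.com')")

# A migration: change the schema without losing existing data.
# In production this runs through a migration tool, very carefully.
conn.execute("ALTER TABLE users ADD COLUMN signup_date TEXT")

columns = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
print(columns)  # ['id', 'email', 'signup_date']
print(conn.execute("SELECT email, signup_date FROM users").fetchone())
# ('dev@example.com', None) — old rows survive; the new column starts empty
```

The nerve-wracking part isn't the syntax. It's that this statement runs against live data, and a mistake here is a mistake in production.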
Cloud means your software runs on someone else's servers. AWS, Google Cloud, Azure. On-premise (on-prem) means you run it on your own servers. Most new companies are cloud-native, meaning they were born in the cloud and have never touched a physical server.
Containers package software so it runs the same everywhere. Docker creates containers. Kubernetes (often shortened to K8s) manages lots of containers at scale. Kubernetes is notoriously complex, and engineers have strong opinions about whether you actually need it. Most startups don't.
Serverless is cloud computing where you don't manage servers at all. You write functions, and the cloud provider runs them. AWS Lambda, Vercel Functions, Supabase Edge Functions. You pay per execution, not per server. It scales automatically. Serverless is popular for APIs and webhooks because you don't have to think about infrastructure.
How do engineers measure quality?
Quality is what separates software that works from software that stays working. Engineers measure it in milliseconds, percentiles, and nines. These metrics drive decisions about what to build, what to fix, and what to leave alone.
Latency is how long it takes to respond to a request. Measured in milliseconds. But averages are misleading, so engineers use percentiles. P50 is the median. P95 means 95% of requests are faster than this number. P99 is the threshold that only the slowest 1% of requests exceed. Engineers obsess over p99 because that's where the worst user experiences hide. Your average latency might look fine while 1% of your users are having a terrible time.
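A quick sketch of why percentiles beat averages, using made-up latency numbers with one slow outlier:

```python
import math

# Made-up response times in milliseconds: nine fast requests, one terrible one.
latencies = sorted([12, 14, 15, 15, 16, 18, 20, 22, 25, 950])

def percentile(values, pct):
    # Nearest-rank percentile over a sorted list: the smallest value
    # such that pct% of the samples fall at or below it.
    rank = math.ceil(len(values) * pct / 100)
    return values[rank - 1]

print("mean:", sum(latencies) / len(latencies))  # 110.7 — distorted by the outlier
print("p50:", percentile(latencies, 50))         # 16 — the typical request
print("p99:", percentile(latencies, 99))         # 950 — the tail where pain hides
```

One slow request drags the mean to 110.7 ms while the median sits at 16 ms. Only the percentiles tell you both stories at once.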
Throughput is how many requests the system handles per second. Latency is speed per request. Throughput is volume. You need both to be healthy.
Availability (or uptime) is what percentage of the time the system is working. Measured in "nines":
- 99.9% (three nines) = 8.7 hours of downtime per year
- 99.99% (four nines) = 52 minutes of downtime per year
- 99.999% (five nines) = 5.2 minutes per year
Each additional nine is exponentially harder and more expensive to achieve.
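The arithmetic behind the nines is simple enough to sketch:

```python
# Convert an availability target into its yearly downtime budget.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct):
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.9, 99.99, 99.999):
    minutes = downtime_minutes_per_year(target)
    print(f"{target}% uptime -> {minutes:.1f} minutes of downtime per year")
```

The jump from four nines to five buys you about 47 minutes a year, which is why each extra nine costs so much more than the one before it.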
Three related terms show up in every reliability conversation. An SLA (Service Level Agreement) is a contractual promise to customers. "We guarantee 99.9% uptime." Miss it and you owe credits or refunds. An SLO (Service Level Objective) is an internal target, usually stricter than the SLA. If your SLA promises 99.9%, your SLO might be 99.95%. An SLI (Service Level Indicator) is the actual measurement. The real number. SLI feeds into SLO, which is protected by SLA. The measurement, the goal, and the promise.
Error rate is what percentage of requests fail. Even small error rates matter at scale. A 0.1% error rate sounds tiny until you realize that's 10,000 failed requests per 10 million.
When something breaks in production, that's an incident. Severity levels range from SEV1 (everything is down, all hands on deck) to SEV4 (minor issue, fix it when you can). After the incident is resolved, the team writes a postmortem. A postmortem is a detailed analysis of what happened, why, and how to prevent it next time. Good postmortems are blameless. They focus on systems, not people.
On-call means engineers take turns being the person who gets paged when something breaks. Usually in weekly rotations. On-call can happen at 3 AM on a Saturday. It's stressful and is a frequent source of burnout. It's why engineering teams care so deeply about reliability. Every reliability investment means fewer midnight pages.
What does scaling mean?
Scaling is the engineering discipline of handling growth. More traffic, more data, more users. When a product goes from 1,000 requests per second to 100,000, the system either scales or it breaks. These are the concepts that determine which outcome you get.
Load balancing distributes traffic across multiple servers so no single server gets overwhelmed. Like opening more checkout lanes at a grocery store when the lines get long.
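The simplest strategy, round-robin, fits in a few lines. The server names are invented, and real load balancers also health-check and weight servers; this is just the core idea:

```python
import itertools

# A toy round-robin load balancer: each incoming request goes to the
# next server in the pool, so load spreads evenly across all of them.
servers = ["server-a", "server-b", "server-c"]
pool = itertools.cycle(servers)

assignments = [next(pool) for _ in range(6)]
print(assignments)  # a, b, c, a, b, c — each server handles 2 of 6 requests
```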
Caching stores frequently accessed data in fast storage (memory) so you don't have to fetch it from the database every time. Redis is the most common cache. Caching sounds simple but is notoriously tricky. There's a famous quote in computer science: "There are only two hard things in computer science: cache invalidation and naming things."
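Here's a minimal cache-aside sketch: check the cache, fall back to the database, store the result with a time-to-live. The "database" is faked with a counter so you can see how many trips the cache saves:

```python
import time

TTL_SECONDS = 60
cache = {}          # key -> (value, expires_at)
db_reads = 0

def fetch_from_db(key):
    global db_reads
    db_reads += 1           # pretend this is an expensive query
    return f"row-for-{key}"

def get(key):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                     # cache hit: no database trip
    value = fetch_from_db(key)              # cache miss: go to the database
    cache[key] = (value, time.time() + TTL_SECONDS)
    return value

get("user:42"); get("user:42"); get("user:42")
print(db_reads)  # 1 — two of the three reads were served from cache
```

The hard part the famous quote is about isn't shown here: deciding when a cached value is stale, and making sure every code path that changes the data also invalidates it.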
A CDN (Content Delivery Network) is a network of servers distributed worldwide that serves static content (images, CSS, JavaScript) from the server closest to the user. Vercel, Cloudflare, and Fastly are popular CDNs. If your website loads fast in New York but slow in Tokyo, you probably need a CDN.
Horizontal scaling means adding more servers. Vertical scaling means making the existing server bigger. Horizontal is usually preferred because it has no ceiling. You can always add another server. Vertical hits a wall when you max out the biggest machine available.
Rate limiting restricts how many requests a user or application can make in a given time period. "You can make 1,000 API calls per minute." Rate limiting prevents abuse and protects the system from being overwhelmed. If you're marketing an API, rate limits will be one of the first things developers ask about.
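One common implementation is a token bucket: each request spends a token, and tokens refill at a fixed rate. A toy sketch (real limiters live in an API gateway or middleware, not in application code like this):

```python
import time

class TokenBucket:
    """Toy token-bucket limiter: `rate` requests/second, bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, up to the bucket's capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: the API would answer HTTP 429

bucket = TokenBucket(rate=1, capacity=3)
results = [bucket.allow() for _ in range(5)]
print(results)  # first 3 allowed (the burst), then denied until tokens refill
```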
What are the key engineering trade-offs?
Every engineering decision has a cost. Ship faster and you accumulate debt. Maintain backwards compatibility and you move slower. Add features and the project grows. Trade-offs are the reason engineering teams say "no" and the reason they ask for time to pay down what they owe.
Technical debt (tech debt) is shortcuts taken to ship faster that will need to be fixed later. Like taking out a loan. Sometimes it's worth it. You ship the feature now, and you'll clean up the code next quarter. But tech debt accumulates interest. Ignore it long enough and everything slows down. When your engineering team says they need a "tech debt sprint," they're paying down the loan.
Refactoring is rewriting code to be better structured without changing what it does. From the outside, nothing looks different. On the inside, everything is cleaner and faster to work with. Refactoring is how you pay down tech debt.
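A before-and-after sketch with an invented pricing function: the behavior is identical, but the refactored version puts the discount policy in one place, so the next change touches one line instead of three branches:

```python
# Before: works, but the discount logic is duplicated across branches.
def price_before(qty, unit):
    if qty > 100:
        return qty * unit - qty * unit * 0.10
    elif qty > 10:
        return qty * unit - qty * unit * 0.05
    else:
        return qty * unit

# After: same behavior, clearer structure. The policy is now data.
DISCOUNT_TIERS = [(100, 0.10), (10, 0.05)]

def price_after(qty, unit):
    subtotal = qty * unit
    for threshold, discount in DISCOUNT_TIERS:
        if qty > threshold:
            return subtotal * (1 - discount)
    return subtotal

# A refactor is safe when tests prove the behavior didn't change.
for q in (5, 50, 500):
    assert abs(price_before(q, 2.0) - price_after(q, 2.0)) < 1e-9
```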
Breaking change is a change that forces other software to update. If you change the API in a way that stops existing integrations from working, that's a breaking change. Developers hate breaking changes. If you're in DevRel, this is one of the most important concepts to internalize. It means you can't always "just add a feature." Sometimes the foundation needs to change first, and that affects everyone who built on top of it.
Backwards compatibility means new versions work with old code. It's hard to maintain, but it's how you keep developers' trust. When you read that an API is "v2 backwards compatible with v1," it means developers using v1 won't need to change anything.
Feature flags are switches that let you turn features on or off without deploying new code. Ship the code, but only show the feature to 5% of users. If something goes wrong, flip the switch. Feature flags are how companies do gradual rollouts. They're also how your engineering team will say yes to launching a feature even if they're not 100% sure about it.
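A sketch of how a percentage rollout can work under the hood: hash each user into a stable bucket from 0 to 99, and enable the flag for buckets below the rollout threshold. The flag name and the 5% figure are made up:

```python
import hashlib

ROLLOUT_PERCENT = 5

def flag_enabled(user_id, flag_name="new-checkout"):
    # Hash flag name + user ID into a bucket 0-99. The hash is stable,
    # so a given user always gets the same answer — no flickering
    # between requests, and no redeploy needed to change the percentage.
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < ROLLOUT_PERCENT

enabled = sum(flag_enabled(f"user-{i}") for i in range(10_000))
print(enabled)  # roughly 500 of 10,000 users see the feature
```

Bumping `ROLLOUT_PERCENT` from 5 to 100 is the "gradual rollout," and dropping it to 0 is the kill switch.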
Scope creep is when a project keeps getting bigger as people add requirements. "Can we also add..." is the phrase that triggers it. Scope creep is the enemy of shipping. When your engineering lead looks stressed and says the project is "growing," this is usually what they mean.
How engineers communicate
Engineers have their own culture of written communication. Understanding it helps you participate.
An RFC (Request for Comments) is a document proposing a technical decision. An engineer writes it, shares it with the team, and others comment. It's a democratic process. Big decisions get RFCs.
A design doc explains how you plan to build something before you build it. Design docs save time because they surface problems before anyone writes code. If you want to understand what the engineering team is about to build, read the design doc.
An ADR (Architecture Decision Record) records why a specific technical decision was made. ADRs are important for future engineers who join the team and wonder, "Why did we do it this way?" If you're writing content about your product's architecture, ADRs are a gold mine.
LGTM means "Looks Good To Me." It's code review approval. When you see it in a PR, the reviewer is saying, "Ship it."
Ship it itself is a cultural phrase. A team that "ships" moves fast and gets things into production. "Ship it" is both approval and a mindset.
Bikeshedding means spending disproportionate time on trivial decisions while ignoring important ones. The name comes from a thought experiment about a nuclear power plant committee that spent hours debating the color of a bike shed while rubber-stamping the reactor design. Every engineering team bikesheds. The good ones catch themselves doing it.
Yak shaving is needing to solve a chain of smaller problems before you can solve the actual problem. "I need to fix the API, but first I need to update the library, but first I need to fix the build pipeline, but first I need to upgrade the container image..." Each step is necessary. None of them are the thing you sat down to do.
When engineering vocabulary makes the news
On July 19, 2024, CrowdStrike pushed a content configuration update to its Falcon sensor software on Windows machines. The update contained a defect. Within hours, roughly 8.5 million Windows computers worldwide crashed with blue screens. Airlines grounded flights. Hospitals delayed procedures. Banks went offline. 911 call centers in multiple US states lost service.
Let's read that incident through the vocabulary we've covered.
CrowdStrike's CI/CD pipeline pushed a configuration update to production. The update was not a code change in the traditional sense. It was a content update to detection rules, which bypassed the standard testing process. There was no adequate staging step for this particular update type. The defect made it straight to prod.
The update triggered a logic error in the CrowdStrike kernel driver, causing a system crash. Because the Falcon sensor runs at the operating system kernel level, the crash was not recoverable through normal means. Affected machines entered a boot loop. Latency wasn't the issue. Availability was. Millions of machines hit zero uptime simultaneously.
CrowdStrike's SLAs with enterprise customers came under immediate scrutiny. The incident was a SEV1 across every affected organization. Postmortems from CrowdStrike and affected companies pointed to the same root cause: a content update path that didn't have the same validation gates as code deployments.
The fix required manual intervention on each affected machine. Administrators had to boot into Safe Mode and delete a specific file. At enterprise scale, with thousands or tens of thousands of machines per organization, this took days. The lack of automated recovery was itself a form of technical debt in CrowdStrike's architecture.
The estimated worldwide financial damage exceeded $10 billion. Delta Air Lines alone claimed over $500 million in losses and sued CrowdStrike.
One vocabulary term ties the whole incident together: breaking change. CrowdStrike pushed a change to production that broke every system it touched. The change was irreversible without manual work. And the blast radius was global.
If you've internalized the vocabulary in this post, you can read any engineering incident report and understand what happened. That's the point.
Quick reference
For those moments when you need a fast definition:
| Term | What it means |
|---|---|
| Repository (repo) | Where code lives, with full version history |
| Branch | A copy of the code for making changes without affecting the main version |
| Pull request (PR) | A proposal to merge code changes, reviewed by other engineers |
| Code review | Other engineers reading and approving your code before it's accepted |
| Merge | Combining a branch into the main code |
| CI/CD | Automated systems that test code (CI) and deploy it to production (CD) |
| Build | Converting source code into something that runs |
| Deploy | Putting code into production for real users |
| Staging | A test environment that mirrors production |
| Production | The live environment that customers use |
| Sprint | A fixed time period (usually two weeks) for completing a set of tasks |
| Backlog | The prioritized list of everything that needs to be built |
| Standup | A short daily meeting to share progress and blockers |
| Velocity | How much work a team completes per sprint |
| API | A structured way for software to talk to other software |
| SDK | Tools and libraries that make it easier to build on an API |
| CLI | A text-based tool developers use from the terminal |
| REST / GraphQL | Two styles of building APIs |
| Microservices / monolith | Small independent services vs. one big application |
| Frontend / backend | What users see vs. what runs on servers |
| Database | Where data lives (SQL for structured, NoSQL for flexible) |
| Schema | The structure of your database |
| Migration | Changing the database schema |
| Cloud / on-prem | Running on someone else's servers vs. your own |
| Containers / Docker | Packaging software so it runs the same everywhere |
| Kubernetes (K8s) | A system for managing containers at scale |
| Serverless | Cloud computing where you don't manage servers |
| Latency | How long a system takes to respond to a request |
| P50 / P95 / P99 | Percentile measurements of latency |
| Throughput | How many requests a system handles per second |
| Availability / uptime | Percentage of time the system is working, measured in "nines" |
| SLA | A contractual promise about uptime or performance |
| SLO | An internal target, usually stricter than the SLA |
| SLI | The actual measurement of performance |
| Error rate | Percentage of requests that fail |
| Incident | When something breaks in production |
| SEV1 through SEV4 | Incident severity levels, from catastrophic to minor |
| Postmortem | A blameless analysis of what went wrong after an incident |
| On-call | Engineers taking turns being paged when something breaks |
| Load balancing | Distributing traffic across multiple servers |
| Caching | Storing frequently accessed data in fast memory |
| CDN | Servers worldwide that serve content from the closest location |
| Horizontal / vertical scaling | Adding more servers vs. making one server bigger |
| Rate limiting | Restricting how many requests can be made in a time period |
| Technical debt | Shortcuts that ship faster now but cost time later |
| Refactoring | Rewriting code to be better structured without changing behavior |
| Breaking change | A change that forces other software to update |
| Backwards compatibility | New versions working with old code |
| Feature flags | Switches to turn features on or off without deploying |
| Scope creep | A project growing beyond its original requirements |
| RFC | A document proposing a technical decision for team discussion |
| Design doc | A plan explaining how something will be built |
| ADR | A record of why a specific technical decision was made |
| LGTM | "Looks Good To Me." Code review approval |
| Bikeshedding | Spending too much time on trivial decisions |
| Yak shaving | Solving a chain of prerequisites before the real problem |
Vocabulary is the starting point. The real skill is seeing how repo, deploy, incident, and refactor connect into a single story about how software gets built. Every concept in this post maps to one of those four questions: What are we building? How do we build it? Is it working? Can it handle more? That's the language of engineering.

Developer marketing expert with 30+ years of experience at Sun Microsystems, Microsoft, AWS, Meta, Twitter, and Supabase. Author of Picks and Shovels, the Amazon #1 bestseller on developer marketing.