[{"content":"Overview Diagram Builder is an open-source code visualization platform built to make large codebases navigable. It parses repositories into a dependency graph, renders them in both 2D and 3D layouts, and exports to multiple formats.\nWhat It Does Repository parsing — scans source files and builds a typed dependency graph of modules, classes, and functions 2D and 3D layouts — force-directed and radial-BFS algorithms render the graph at different levels of abstraction Semantic tiered views — zoom from architecture-level down to individual module dependencies Export — outputs to JSON, SVG, and image formats for documentation and diagramming tools How It Was Built Built using an agent-driven development workflow with Claude Code and the Anthropic SDK. Most of the core graph logic and Three.js rendering pipeline was developed through iterative agent sessions, with human oversight on architecture decisions and API design.\nTech Stack TypeScript + Node.js — parser and graph engine React — UI and controls Three.js — 3D graph rendering Neo4j — graph database for persistent storage and complex queries ","permalink":"http://brianmehrman.com/portfolio/diagram-builder/","summary":"\u003ch2 id=\"overview\"\u003eOverview\u003c/h2\u003e\n\u003cp\u003eDiagram Builder is an open-source code visualization platform built to make large codebases navigable. 
It parses repositories into a dependency graph, renders them in both 2D and 3D layouts, and exports to multiple formats.\u003c/p\u003e\n\u003ch2 id=\"what-it-does\"\u003eWhat It Does\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eRepository parsing\u003c/strong\u003e — scans source files and builds a typed dependency graph of modules, classes, and functions\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e2D and 3D layouts\u003c/strong\u003e — force-directed and radial-BFS algorithms render the graph at different levels of abstraction\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eSemantic tiered views\u003c/strong\u003e — zoom from architecture-level down to individual module dependencies\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eExport\u003c/strong\u003e — outputs to JSON, SVG, and image formats for documentation and diagramming tools\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"how-it-was-built\"\u003eHow It Was Built\u003c/h2\u003e\n\u003cp\u003eBuilt using an agent-driven development workflow with Claude Code and the Anthropic SDK. Most of the core graph logic and Three.js rendering pipeline was developed through iterative agent sessions, with human oversight on architecture decisions and API design.\u003c/p\u003e","title":"Diagram Builder"},{"content":"The Problem Basis Technologies ran shared long-lived integration environments. Engineers queued up, stepped on each other\u0026rsquo;s changes, and environment contention was a constant source of delays. Setting up a fresh environment could take days — sometimes weeks if infrastructure issues arose.\nThe Solution I designed and led the build of an on-demand full-stack environment system on AWS EKS. 
Each environment runs the full platform stack: application services, data warehouse, event streaming, and persistent storage.\nKey design decisions:\nNamespace-per-environment — complete isolation at the Kubernetes layer, no cross-contamination Pipeline-triggered provisioning — engineers request an environment through the CI/CD pipeline, not through manual ops work Automated teardown — environments are destroyed after use, keeping costs predictable Results Setup time: days-to-weeks → 3-4 hours Concurrent environments at peak: 50-60 Shared long-lived environments: fully retired Environment contention incidents: eliminated Scope This was a multi-quarter initiative. I handled architecture, implementation, documentation, and rollout — including two company-wide bootcamp sessions (Apr 2023, Dec 2024) to onboard the 30-engineer organization.\n","permalink":"http://brianmehrman.com/portfolio/kubernetes-test-environments/","summary":"\u003ch2 id=\"the-problem\"\u003eThe Problem\u003c/h2\u003e\n\u003cp\u003eBasis Technologies ran shared long-lived integration environments. Engineers queued up, stepped on each other\u0026rsquo;s changes, and environment contention was a constant source of delays. Setting up a fresh environment could take days — sometimes weeks if infrastructure issues arose.\u003c/p\u003e\n\u003ch2 id=\"the-solution\"\u003eThe Solution\u003c/h2\u003e\n\u003cp\u003eI designed and led the build of an on-demand full-stack environment system on AWS EKS. Each environment runs the full platform stack: application services, data warehouse, event streaming, and persistent storage.\u003c/p\u003e","title":"On-Demand Kubernetes Test Environments"},{"content":"The Problem Engineering teams at Basis were building and maintaining their own CI/CD pipelines independently. There was no standard. 
Pipelines varied in quality, lacked consistent artifact management, and created ongoing maintenance burden for each team.\nThe Solution I designed and authored the company-wide CI/CD pipeline standard — a 14-stage pipeline that became the default for all engineering services.\nThe 14-stage pipeline includes:\nSource checkout Dependency installation Static analysis / linting Unit tests Integration test trigger Docker multi-stage build Image tagging (semantic versioning) Security scanning Artifact registry push Staging deploy Smoke tests Production deploy gate Production deploy Post-deploy verification Three deployment modes:\nLocked standard — teams inherit the full pipeline with no changes; zero maintenance overhead Configurable — teams can toggle stages on/off via config; still managed centrally Fully custom — teams own their pipeline; opt-in for services with unusual requirements Results Adopted across all engineering services Eliminated per-team pipeline maintenance for teams on locked standard Semantic versioning standardized across all artifacts Docker multi-stage builds reduced image sizes and improved security posture Legacy pipeline suite fully retired January 2025 ","permalink":"http://brianmehrman.com/portfolio/cicd-pipeline-standard/","summary":"\u003ch2 id=\"the-problem\"\u003eThe Problem\u003c/h2\u003e\n\u003cp\u003eEngineering teams at Basis were building and maintaining their own CI/CD pipelines independently. There was no standard. 
Pipelines varied in quality, lacked consistent artifact management, and created ongoing maintenance burden for each team.\u003c/p\u003e\n\u003ch2 id=\"the-solution\"\u003eThe Solution\u003c/h2\u003e\n\u003cp\u003eI designed and authored the company-wide CI/CD pipeline standard — a 14-stage pipeline that became the default for all engineering services.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eThe 14-stage pipeline includes:\u003c/strong\u003e\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eSource checkout\u003c/li\u003e\n\u003cli\u003eDependency installation\u003c/li\u003e\n\u003cli\u003eStatic analysis / linting\u003c/li\u003e\n\u003cli\u003eUnit tests\u003c/li\u003e\n\u003cli\u003eIntegration test trigger\u003c/li\u003e\n\u003cli\u003eDocker multi-stage build\u003c/li\u003e\n\u003cli\u003eImage tagging (semantic versioning)\u003c/li\u003e\n\u003cli\u003eSecurity scanning\u003c/li\u003e\n\u003cli\u003eArtifact registry push\u003c/li\u003e\n\u003cli\u003eStaging deploy\u003c/li\u003e\n\u003cli\u003eSmoke tests\u003c/li\u003e\n\u003cli\u003eProduction deploy gate\u003c/li\u003e\n\u003cli\u003eProduction deploy\u003c/li\u003e\n\u003cli\u003ePost-deploy verification\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003e\u003cstrong\u003eThree deployment modes:\u003c/strong\u003e\u003c/p\u003e","title":"Company-Wide CI/CD Pipeline Standard"},{"content":"Code Blocks Ruby code blocks are from the closure family\nRuby blocks are found throughout Ruby. They are a powerful feature that allows you to pass snippets of code to enumerable methods (e.g. each, select, detect) as well as custom methods using the yield keyword.\nBlocks are nothing new; they use the computer science concept called closures. This concept was invented by Peter J. Landin in 1964. Closures were adopted by a version of Lisp called Scheme in 1975.\nWhat is a block Blocks are a concept that allows programmers to extend their applications to have a domain specific language (DSL).
A DSL is a form of metaprogramming that makes it easier to program and configure complex systems.\nIf you have ever used a method on an Array to loop through the elements in that Array, you have used a block.\n[\u0026#34;alex\u0026#34;, \u0026#34;gina\u0026#34;, \u0026#34;pam\u0026#34;].each do |inserted_name| puts \u0026#34;Hello, #{inserted_name}\u0026#34; end A block starts with do and terminates with end. A block can have a parameter defined between the pipes |x| at the beginning of a block definition. When defining a block on a single line, use braces: arr.each { |i| }.\nThis single-line block is often called an inline block.\n[\u0026#34;alex\u0026#34;, \u0026#34;gina\u0026#34;, \u0026#34;pam\u0026#34;].each { |inserted_name| puts \u0026#34;Hello, #{inserted_name}\u0026#34; } When writing a block that spans more than one line, do / end should be used. These are called multi-line blocks.\nMethods All Ruby methods can be associated with a block. A block allows programmers to group statements together with a method call.\ndef something puts \u0026#34;do something\u0026#34; end If the method does not call yield, the block will not be called. The method will ignore the block being passed.\nsomething do puts \u0026#34;do something else\u0026#34; end Here the method something is being passed a block. However, as the output below shows, the puts statement in the method is executed while the statement in the block is not.\ndo something To execute the block passed to a method, it must be yielded from within the method.\nYield An associated block can be invoked within the method using the keyword yield. This allows the programmer to control the flow of the block’s execution. Without calling yield the block is ignored, as seen above.\ndef something_else puts \u0026#34;Begin\u0026#34; yield puts \u0026#34;End\u0026#34; end The contents of the block will be executed when the yield statement is called.
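A method that calls yield but is invoked without a block raises a LocalJumpError. The block_given? predicate lets a method yield only when a block was actually supplied. A minimal sketch (the method name maybe_yield is illustrative):

```ruby
def maybe_yield
  # guard: only yield when the caller actually supplied a block
  return :no_block unless block_given?
  yield
end

maybe_yield                 # => :no_block
maybe_yield { :ran_block }  # => :ran_block
```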
Once the block has been yielded, control of execution is returned to the method.\nsomething_else do puts \u0026#34;yielded code\u0026#34; end The method executing the block can run any code before or after the block is invoked.\nBegin yielded code End When something_else is called with a block, the method prints out the string Begin. The block is then invoked, printing out yielded code. Control then returns to the method, which prints out the final string End.\nPassing Arguments Yielded blocks can be passed arguments from within a method. These arguments can then be used within the block as parameters.\ndef run_three [1,2,3].each do |i| yield(i) end end The parameters are defined between the pipes |x|. The arity of the parameters should match the number of arguments being passed into the yield statement.\nrun_three do |i| puts \u0026#34;and a #{i}\u0026#34; end The output:\nand a 1 and a 2 and a 3 Above we can see that the method run_three is invoked with a block that accepts an argument. Within the run_three method we have an array of 3 numbers [1,2,3]. The method iterates through the array and calls yield with each value in the array.\nBlocks as Arguments A block can be passed around as an argument to a method. This can be done by adding an argument to the method with an ampersand \u0026amp;block.\ndef pass_method(\u0026amp;block) block.call end Invoking the call method on the block will execute the block.
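Because blocks are closures, a block captured with \u0026amp;block keeps a reference to the local variables that were in scope where it was written, even when it is stored and called later. A small sketch (the method name capture is illustrative):

```ruby
def capture(&block)
  # &block converts the block into a Proc object we can hold on to
  block
end

counter = 0
bump = capture { counter += 1 }

bump.call
bump.call
puts counter  # prints 2; the Proc still sees and mutates the captured local
```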
When the passed block is stored in a variable it is converted to a Proc.\npass_method { puts \u0026#34;yielded code\u0026#34; } The block can still have parameters passed to it, like a block that is yielded.\ndef my_iterator(things, \u0026amp;block) things.each do |thing| block.call(thing) end end van = [] my_iterator [\u0026#39;John\u0026#39;, \u0026#39;Paul\u0026#39;, \u0026#39;George\u0026#39;, \u0026#39;Ringo\u0026#39;] do |beatle| van \u0026lt;\u0026lt; beatle end puts \u0026#34;#{van.join(\u0026#39;, \u0026#39;)} are in the van\u0026#34; A block is not executed at the time it is encountered. The current context is stored with the block for later use before entering the method. This means local variables referenced in the block will have the same values as when the block was passed to the method call.\nConclusion A block can be used to create some commonly used code that allows you to inject custom code where it is used. This comes in handy when you need to dry up your code but find that the code differs in a few key places.\n","permalink":"http://brianmehrman.com/blog/ruby-code-blocks/","summary":"\u003ch1 id=\"code-blocks\"\u003eCode Blocks\u003c/h1\u003e\n\u003cp\u003eRuby code blocks are from the closure family\u003c/p\u003e\n\u003cp\u003eRuby blocks are found throughout Ruby. They are a powerful feature that allows\nyou to pass snippets of code to enumerable methods (e.g. each, select, detect)\nas well as custom methods using the yield keyword.\u003c/p\u003e\n\u003cp\u003eBlocks are nothing new; they use the computer science concept called closures.\nThis concept was invented by Peter J. Landin in 1964. Closures were adopted by\na version of Lisp called Scheme in 1975.\u003c/p\u003e","title":"Ruby Code Blocks"},{"content":"Database Lock Modes PostgreSQL provides different locking modes to control concurrency within the database. Locks are automatically acquired by most PostgreSQL commands to make sure tables are not dropped or modified while a command is executed.
Locks can also be acquired manually for application-level control.\nActiveRecord provides the interface between the Rails application and the database. Using ActiveRecord we can create these database-level locks.\nSetup For the examples in this post we will be using the following data setup with a new Rails application.\n# Create the tables class CreatePeople \u0026lt; ActiveRecord::Migration def change create_table :people do |t| t.string :name t.timestamps end end end class CreateToys \u0026lt; ActiveRecord::Migration def change create_table :toys do |t| t.string :name t.references :person, foreign_key: true t.timestamps end end end # Add Models class Person \u0026lt; ApplicationRecord has_many :toys end class Toy \u0026lt; ApplicationRecord belongs_to :person, required: false end # Generate some data people = %w(Dave Bob Alice Ace Bee Chip) people.each { |name| Person.find_or_create_by({ name: name }) } toys = %w(rocket plane ship yo-yo bike boat truck blocks house doll racecar thermonuclearbomb) toys.each { |toy_name| Toy.find_or_create_by({ name: toy_name }) } Table Level Locks This table shows which held locks on a table (columns) will block a requested lock (rows).\nRequested Lock Mode | ACCESS SHARE | ROW SHARE | ROW EXCLUSIVE | SHARE UPDATE EXCLUSIVE | SHARE | SHARE ROW EXCLUSIVE | EXCLUSIVE | ACCESS EXCLUSIVE\nACCESS SHARE | | | | | | | | X\nROW SHARE | | | | | | | X | X\nROW EXCLUSIVE | | | | | X | X | X | X\nSHARE UPDATE EXCLUSIVE | | | | X | X | X | X | X\nSHARE | | | X | X | | X | X | X\nSHARE ROW EXCLUSIVE | | | X | X | X | X | X | X\nEXCLUSIVE | | X | X | X | X | X | X | X\nACCESS EXCLUSIVE | X | X | X | X | X | X | X | X\nAccess Share AccessShareLock Access share is the most common lock and is only blocked by an ACCESS EXCLUSIVE.\nA simple select on any table will acquire the ACCESS SHARE lock. Any table reference in a query can create a lock on that referenced table.\nToy.find_by(name: \u0026#39;racecar\u0026#39;) The find_by method on an ActiveRecord object is an easy way to acquire the ACCESS SHARE lock. These locks are only held while the query is running.
A normal selection of a single row will not block the selection of the same row by another connection.\nShare Row Exclusive ShareRowExclusiveLock The SHARE ROW EXCLUSIVE lock is not automatically acquired by any PostgreSQL command. This lock is created by the user or the application that is connected to PostgreSQL.\ntoy = Toy.find_by(name: \u0026#39;racecar\u0026#39;) toy.lock! toy.save! In Rails you can acquire this lock by using the lock! method. This method is used by developers who want to use a pessimistic locking strategy.\nAccess Exclusive AccessExclusiveLock The ACCESS EXCLUSIVE lock will block all other locks from being acquired.\nThis lock is acquired by the DROP TABLE, TRUNCATE, REINDEX, CLUSTER, VACUUM FULL, and REFRESH MATERIALIZED VIEW commands. The ALTER TABLE command can also acquire an ACCESS EXCLUSIVE lock.\nActiveRecord::Migration.add_column :toys, :counter, :integer Adding a column to a table will acquire an ACCESS EXCLUSIVE lock.\nFor more information on the other database-level locks and which commands create them, please check the PostgreSQL documentation.\nDeadlock A deadlock occurs when two or more transactions each attempt to acquire a lock the other holds. Neither process can proceed, so PostgreSQL resolves the deadlock by aborting one of the transactions involved.\nExample Transaction A sends a request to update the person on several toys. This requires locks to be acquired on the toys table.\n# Transaction A: Bob picks up 5 toys ActiveRecord::Base.transaction do bob = Person.find_by(name: \u0026#39;Bob\u0026#39;) names = %w(bike yo-yo racecar truck rocket) names.each do |toy_name| toy = Toy.find_by(name: toy_name) toy.lock! toy.person = bob sleep(5) toy.save end end Transaction B sends a request to update the person on several toys as well. This list includes a couple of the same toys the first transaction is going to update.\n# Transaction B: Alice picks up 5 toys.
ActiveRecord::Base.transaction do alice = Person.find_by(name: \u0026#39;Alice\u0026#39;) names = %w(rocket plane ship doll bike) names.each do |toy_name| toy = Toy.find_by(name: toy_name) toy.lock! toy.person = alice sleep(5) toy.save end end If both of these transactions are run at the same time, you will see a deadlock occur when transaction A tries to acquire a lock on a record that transaction B has already locked.\n(1.1ms) BEGIN Person Load (0.8ms) SELECT \u0026#34;people\u0026#34;.* FROM \u0026#34;people\u0026#34; WHERE \u0026#34;people\u0026#34;.\u0026#34;name\u0026#34; = $1 LIMIT $2 [[\u0026#34;name\u0026#34;, \u0026#34;Alice\u0026#34;], [\u0026#34;LIMIT\u0026#34;, 1]] Toy Load (0.8ms) SELECT \u0026#34;toys\u0026#34;.* FROM \u0026#34;toys\u0026#34; WHERE \u0026#34;toys\u0026#34;.\u0026#34;name\u0026#34; = $1 LIMIT $2 [[\u0026#34;name\u0026#34;, \u0026#34;rocket\u0026#34;], [\u0026#34;LIMIT\u0026#34;, 1]] Toy Load (0.7ms) SELECT \u0026#34;toys\u0026#34;.* FROM \u0026#34;toys\u0026#34; WHERE \u0026#34;toys\u0026#34;.\u0026#34;id\u0026#34; = $1 LIMIT $2 FOR UPDATE [[\u0026#34;id\u0026#34;, 1], [\u0026#34;LIMIT\u0026#34;, 1]] Toy Load (1.4ms) SELECT \u0026#34;toys\u0026#34;.* FROM \u0026#34;toys\u0026#34; WHERE \u0026#34;toys\u0026#34;.\u0026#34;name\u0026#34; = $1 LIMIT $2 [[\u0026#34;name\u0026#34;, \u0026#34;bike\u0026#34;], [\u0026#34;LIMIT\u0026#34;, 1]] Toy Load (1002.7ms) SELECT \u0026#34;toys\u0026#34;.* FROM \u0026#34;toys\u0026#34; WHERE \u0026#34;toys\u0026#34;.\u0026#34;id\u0026#34; = $1 LIMIT $2 FOR UPDATE [[\u0026#34;id\u0026#34;, 5], [\u0026#34;LIMIT\u0026#34;, 1]] (1.1ms) ROLLBACK ActiveRecord::Deadlocked: PG::TRDeadlockDetected: ERROR: deadlock detected DETAIL: Process 1976 waits for ShareLock on transaction 82071; blocked by process 1978. Process 1978 waits for ShareLock on transaction 82070; blocked by process 1976. HINT: See server log for query details.
CONTEXT: while locking tuple (0,29) in relation \u0026#34;toys\u0026#34; : SELECT \u0026#34;toys\u0026#34;.* FROM \u0026#34;toys\u0026#34; WHERE \u0026#34;toys\u0026#34;.\u0026#34;id\u0026#34; = $1 LIMIT $2 FOR UPDATE from (irb):75:in `block (2 levels) in irb_binding\u0026#39; from (irb):73:in `each\u0026#39; from (irb):73:in `block in irb_binding\u0026#39; from (irb):69 Displaying Locks The locks acquired in PostgreSQL can be viewed using the psql client to query the PostgreSQL server.\nExample For this example we will need to open three connections to our Rails console. To see these locks in action we will need to first add a lock to the database that will block all subsequent queries.\nIn one of the Rails consoles we will create a lock on the toys table using the following command:\nActiveRecord::Base.transaction do toy = Toy.find_by(name: \u0026#39;racecar\u0026#39;) toy.lock! sleep(3600) # wait for an hour toy.save! end This will create an ExclusiveLock. This will not block an AccessShareLock, but it will block an update. We will next try to update the same record from our second Rails console.\ntoy = Toy.find_by(name: \u0026#39;racecar\u0026#39;) toy.update(person_id: 1) This update will attempt to acquire a ShareLock on the toys table. The ExclusiveLock will block the update.
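Rather than letting a blocked session wait indefinitely, PostgreSQL can be told to give up after a deadline. A sketch using the lock_timeout setting (the value is illustrative); the statement fails with an error instead of queueing behind the blocking lock:

```sql
-- abort any statement that waits more than 5 seconds for a lock
SET lock_timeout = '5s';
UPDATE toys SET person_id = 1 WHERE name = 'racecar';
```

Using SET LOCAL lock_timeout inside a transaction scopes the setting to that transaction only.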
You can view this lock by querying the pg_catalog.pg_locks and pg_catalog.pg_stat_activity views.\nUsing the SQL query below we can see which statement is blocking another.\nSELECT blocked_locks.pid AS blocked_pid, blocked_activity.usename AS blocked_user, blocking_locks.pid AS blocking_pid, blocking_activity.usename AS blocking_user, blocked_activity.application_name AS blocked_application, blocking_activity.application_name AS blocking_application, blocked_activity.query AS blocked_statement, blocking_activity.query AS current_statement_in_blocking_process, blocked_locks.mode AS blocked_mode, blocking_locks.mode AS blocking_mode FROM pg_catalog.pg_locks blocked_locks JOIN pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid JOIN pg_catalog.pg_locks blocking_locks ON blocking_locks.locktype = blocked_locks.locktype AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid AND blocking_locks.pid != blocked_locks.pid JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid WHERE NOT blocked_locks.GRANTED; This query should show you something that looks like this.\nblocked_pid | blocked_user | blocking_pid | blocking_user | blocked_statement | current_statement_in_blocking_process | blocked_application | blocking_application | blocked_mode | blocking_mode
-------------+--------------+--------------+---------------+---------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------+---------------------+----------------------+---------------------+--------------------- 2210 | postgres | 2436 | postgres | UPDATE \u0026#34;toys\u0026#34; SET \u0026#34;person_id\u0026#34; = $1, \u0026#34;updated_at\u0026#34; = $2 WHERE \u0026#34;toys\u0026#34;.\u0026#34;id\u0026#34; = $3 | SELECT \u0026#34;toys\u0026#34;.* FROM \u0026#34;toys\u0026#34; WHERE \u0026#34;toys\u0026#34;.\u0026#34;id\u0026#34; = $1 LIMIT $2 FOR UPDATE | rails_console | rails_console | ShareLock | ExclusiveLock Next we will create an ACCESS EXCLUSIVE lock to view how this lock can block all other queries on a table.\nActiveRecord::Base.transaction do ActiveRecord::Migration.add_column :toys, :countz, :integer sleep(3600) # wait for an hour end This migration creates an ALTER TABLE command that will acquire an ACCESS EXCLUSIVE lock on the toys table.\nThis lock will block any other attempts to acquire a lock on the toys table, allowing us to queue up queries that we will see in pg_locks.\nBelow are examples of other locks you can create.\nRow Share Lock Toy.find_by_sql(\u0026#34;SELECT * FROM toys WHERE name = \u0026#39;racecar\u0026#39; FOR UPDATE\u0026#34;) Row Exclusive Lock toy = Toy.find_by(name: \u0026#39;racecar\u0026#39;) toy.update(person_id: 1) toy Share ActiveRecord::Migration.add_index(:toys, :name) Summary Locks exist to protect data integrity between concurrent transactions. They allow us to ensure that a long-running query or update will not be corrupted by a conflicting change to the table(s) you are using.\nWhile reading from a table will not block other reads, you must be mindful when planning to run migrations that alter tables that are read by multiple users at once.
Especially if those tables require a user to look at multiple records in one query. An ALTER TABLE can easily block a user from reading a record.\n","permalink":"http://brianmehrman.com/blog/database-record-locking/","summary":"\u003ch1 id=\"database-lock-modes\"\u003eDatabase Lock Modes\u003c/h1\u003e\n\u003cp\u003ePostgreSQL provides different locking modes to control concurrency within the database.\nLocks are automatically acquired by most PostgreSQL commands to make sure tables are not\ndropped or modified while a command is executed. Locks can also be acquired manually\nfor application-level control.\u003c/p\u003e\n\u003cp\u003eActiveRecord provides the interface between the Rails application and the database.\nUsing ActiveRecord we can create these database-level locks.\u003c/p\u003e\n\u003ch2 id=\"setup\"\u003eSetup\u003c/h2\u003e\n\u003cp\u003eFor the examples in this post we will be using the following data setup with a new Rails application.\u003c/p\u003e","title":"Postgresql Record Locking"},{"content":" Data integrity is at risk once two sessions begin to work on the same records…\nMartin Fowler Optimistic Locking This data access strategy assumes that the chance of conflict between sessions is very low. Changes from one session are validated before being committed to the database.\nPessimistic Locking This opposing data access strategy assumes that conflict between sessions is highly likely.
Conflicts are prevented by forcing all transactions to obtain a lock on the data (record or table) before they can start to use it.\nRuby on Rails Rails has optimistic and pessimistic locking built into the ActiveRecord gem.\nUsing Optimistic To enable optimistic locking you only need to add a lock_version column to your table schema.\nt.integer \u0026#34;lock_version\u0026#34;, default: 0 You may also specify a different column like:\nclass Toy \u0026lt; ActiveRecord::Base self.locking_column = :lock_toy end Rails uses this column to keep two sessions from updating the same record at the same time. The lock_version is incremented every time the record is saved. Whenever the record is updated, the correct lock version must be provided when it is saved. If the wrong lock_version is provided an error will be thrown.\nWhen two sessions are running at the same time you can see the conflict optimistic locking protects against.\n# Alice’s session alice:001:0\u0026gt; alice = Person.find_by(name: \u0026#39;Alice\u0026#39;) alice:002:0\u0026gt; toy = Toy.find_by(name: \u0026#39;racecar\u0026#39;) alice:003:0\u0026gt; toy.person_id = alice.id alice:004:0\u0026gt; toy.save! true # Bob’s session bob:001:0\u0026gt; bob = Person.find_by(name: \u0026#39;Bob\u0026#39;) bob:002:0\u0026gt; toy = Toy.find_by(name: \u0026#39;racecar\u0026#39;) bob:003:0\u0026gt; toy.person_id = bob.id bob:004:0\u0026gt; toy.save! ActiveRecord::StaleObjectError: Attempted to update a stale object: Toy. from bob:4 Using Pessimistic With pessimistic locking there is no ‘enabling’ needed. Calling lock with a find query is all that is needed to use pessimistic locking in Rails.\nToy.lock.find_by(name: \u0026#39;racecar\u0026#39;) The above code will lock the racecar Toy until the end of the transaction. Calling lock outside of a transaction will only lock the record for the single call.\nwith_lock There is a way to begin a transaction and lock a record in a single call.
The method with_lock will wrap a provided block within a transaction, locking the model on which it was called.\nThe examples below are two pieces of code that will run at the same time in two different sessions.\nSession Alice\nalice = Person.find_by(name: \u0026#39;Alice\u0026#39;) toy = Toy.find_by(name: \u0026#39;racecar\u0026#39;) toy.with_lock do toy.person_id = alice.id toy.save! end Session Bob\nbob = Person.find_by(name: \u0026#39;Bob\u0026#39;) toy = Toy.find_by(name: \u0026#39;racecar\u0026#39;) toy.with_lock do toy.person_id = bob.id toy.save! end One of these sessions will get the lock before the other. For this example let’s say that Alice got the lock first.\n(Alice):004:0\u0026gt; toy.with_lock do (Alice):005:1* toy.person_id = alice.id (Alice):006:1\u0026gt; sleep(200) (Alice):007:1\u0026gt; toy.save! (Alice):008:1\u0026gt; end (1.3ms) BEGIN Toy Load (1.2ms) SELECT \u0026#34;toys\u0026#34;.* FROM \u0026#34;toys\u0026#34; WHERE \u0026#34;toys\u0026#34;.\u0026#34;id\u0026#34; = $1 LIMIT $2 FOR UPDATE [[\u0026#34;id\u0026#34;, 8], [\u0026#34;LIMIT\u0026#34;, 1]] SQL (3.3ms) UPDATE \u0026#34;toys\u0026#34; SET \u0026#34;person_id\u0026#34; = 3, \u0026#34;updated_at\u0026#34; = \u0026#39;2018-07-01 20:42:56.401262\u0026#39;, \u0026#34;lock_version\u0026#34; = 11 WHERE \u0026#34;toys\u0026#34;.\u0026#34;id\u0026#34; = $1 AND \u0026#34;toys\u0026#34;.\u0026#34;lock_version\u0026#34; = $2 [[\u0026#34;id\u0026#34;, 8], [\u0026#34;lock_version\u0026#34;, 10]] (1.2ms) COMMIT Notice the 200 second sleep. This will allow us to see what would happen with a long-running process.\n(Bob):006:0\u0026gt; toy.with_lock do (Bob):007:1* toy.person_id = bob.id (Bob):008:1\u0026gt; toy.save!
(Bob):009:1\u0026gt; end (1.2ms) BEGIN Toy Load (198875.2ms) SELECT \u0026#34;toys\u0026#34;.* FROM \u0026#34;toys\u0026#34; WHERE \u0026#34;toys\u0026#34;.\u0026#34;id\u0026#34; = $1 LIMIT $2 FOR UPDATE [[\u0026#34;id\u0026#34;, 8], [\u0026#34;LIMIT\u0026#34;, 1]] SQL (0.9ms) UPDATE \u0026#34;toys\u0026#34; SET \u0026#34;person_id\u0026#34; = 2, \u0026#34;updated_at\u0026#34; = \u0026#39;2018-07-01 20:42:56.410077\u0026#39;, \u0026#34;lock_version\u0026#34; = 12 WHERE \u0026#34;toys\u0026#34;.\u0026#34;id\u0026#34; = $1 AND \u0026#34;toys\u0026#34;.\u0026#34;lock_version\u0026#34; = $2 [[\u0026#34;id\u0026#34;, 8], [\u0026#34;lock_version\u0026#34;, 11]] (1.1ms) COMMIT We can see the effect of the lock on the Bob session. The Toy Load took 198875.2ms; all of that time was the application waiting on the Alice session.\nlock! An alternative to with_lock is lock!. This will lock the record’s row for the duration of the transaction.\nActiveRecord::Base.transaction do alice = Person.find_by(name: \u0026#39;Alice\u0026#39;) toy = Toy.find_by(name: \u0026#39;racecar\u0026#39;) toy.lock! toy.person_id = alice.id toy.save! end Conclusion Optimistic and pessimistic locking each have their own purpose, with benefits and problems.\nOptimistic locking is fine to have enabled on most user-managed models. It can be used to keep two users from updating the same record at the same time or from saving outdated data, provided that the user form passes the lock_version as part of the posted data.\nPessimistic locking provides a way to use the database layer to determine if someone is using that data. This can be easy for updating a single table record, but becomes more difficult the more records you try to lock. You can use pessimistic locking to make users wait to update their shared data, or to notify users that a person is updating a record or set of records.\nI would be careful with pessimistic locking.
This can lead to database deadlocks.\n","permalink":"http://brianmehrman.com/blog/optimistic-vs-pessimistic-locking/","summary":"\u003cblockquote\u003e\n\u003cp\u003eData integrity is at risk once two sessions begin to work on the same records…\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cul\u003e\n\u003cli\u003eMartin Fowler\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch1 id=\"optimistic-locking\"\u003eOptimistic Locking\u003c/h1\u003e\n\u003cp\u003eThis data access strategy assumes that the chance of conflict between sessions is very low. Changes from one session are validated before being committed to the database.\u003c/p\u003e\n\u003ch1 id=\"pessimistic-locking\"\u003ePessimistic Locking\u003c/h1\u003e\n\u003cp\u003eThis opposing data access strategy assumes that conflict between sessions is highly likely. Conflicts are prevented by forcing all transactions to obtain a lock on the data (record or table) before they can start to use it.\u003c/p\u003e","title":"Optimistic vs Pessimistic Locking"},{"content":"Git Git is a free version control system designed for both small and large projects.\nTips Rebase Whenever I am working on a project with other developers I find that I need to rebase my feature branch with the common branch the rest of the team is using. This branch is usually called dev or ci.\n$ git fetch To make sure I have the latest branches I run a fetch to update all my local branches.\n$ git rebase -i origin/dev my_branch Rebase can work on local branches as well as remote branches. In the above example I rebased my branch on the remote version of dev. The -i flag stands for interactive. This allows me to check the commits that I am rebasing onto the dev branch.\nReset HEAD $ git reset HEAD This command will reset the staging area to the latest commit on my current branch. This is a great way to clean up a branch and start from where you last committed.\n$ git reset HEAD~ The ~ (tilde) character next to HEAD tells git to go back 1 commit.
I use this command to remove bad commits. Resetting the commit will unstage the files from the previous commit.\nDelete Branches git branch -d my_branch This will delete the branch named my_branch, but only if the branch has been merged.\ngit branch -D my_branch This will delete the branch even if the branch has not been merged.\ngit branch --list Lists out all the branches that are local to your machine.\ngit branch -D $(git branch --list \u0026#34;hot*\u0026#34;) This deletes all local branches prefixed with hot. Quoting the pattern keeps the shell from expanding hot* against file names in the current directory.\n","permalink":"http://brianmehrman.com/blog/git-tips/","summary":"\u003ch1 id=\"git\"\u003eGit\u003c/h1\u003e\n\u003cp\u003eGit is a free version control system designed for both small and large projects.\u003c/p\u003e\n\u003ch1 id=\"tips\"\u003eTips\u003c/h1\u003e\n\u003ch2 id=\"rebase\"\u003eRebase\u003c/h2\u003e\n\u003cp\u003eWhenever I am working on a project with other developers I find that I need to rebase\nmy feature branch with the common branch the rest of the team is using. This branch\nis usually called \u003ccode\u003edev\u003c/code\u003e or \u003ccode\u003eci\u003c/code\u003e.\u003c/p\u003e\n\u003cpre tabindex=\"0\"\u003e\u003ccode\u003e$ git fetch\n\u003c/code\u003e\u003c/pre\u003e\u003cp\u003eTo make sure I have the latest branches I run a \u003ccode\u003efetch\u003c/code\u003e to update all my local branches.\u003c/p\u003e","title":"Git Tips"},{"content":"Meta Programming Having to write redundant methods can lead to hundreds of lines of code that can be hard to maintain. Meta programming provides us with tools that allow us to write code that writes itself.\nClass Eval class_eval opens the class and adds the method to the class itself. 
While this takes longer to define a method, it puts the new method in the ancestor chain where it can be accessed more quickly than if you used define_method.\nclass Test; end Test.class_eval(\u0026#34;def new_method() 1+1 end\u0026#34;) t = Test.new t.new_method # =\u0026gt; 2 Define Method define_method will define an instance method on the receiver using the block or Proc provided. While it is faster to create the method, it carries a little overhead that will slow the method’s execution down. While this performance hit is relatively small, it has a cumulative effect that can slow down code if the method is called many times in a row.\nclass Test; end Test.define_method(:new_method) do 1 + 1 end # or using a proc a_proc = Proc.new { 1 + 1 } Test.define_method(:another_method, a_proc) t = Test.new t.new_method # =\u0026gt; 2 t.another_method # =\u0026gt; 2 Instruction Sequence When Ruby compiles the code it creates instructions for YARV, Ruby’s virtual machine. These instructions differ between the different ways you can define a method, even if the code in the method is basically the same.\n# Defines the methods normally module Foo def my_method \u0026#39;module method\u0026#39; end end # Defines the method using class eval module ClassFoo class_eval \u0026lt;\u0026lt;-RUBY def my_method \u0026#39;Class evaluated method\u0026#39; end RUBY end # Defines the methods using define_method module DefineFoo define_method \u0026#39;my_method\u0026#39; do \u0026#39;Define method method\u0026#39; end end # Base Class class MyClass def my_method \u0026#39;class method\u0026#39; end end # Subclass normal methods class MySubclass \u0026lt; MyClass include Foo end # class using class eval methods class CEClass \u0026lt; MyClass include ClassFoo end # Class using define method methods class DMClass \u0026lt; MyClass include DefineFoo end class_evaluated = CEClass.new dynamically_defined = DMClass.new normally_defined = MySubclass.new Above we have three modules that each implement the same 
method in different ways: def, class_eval, and define_method. Each module has been included in its own class.\nWith the help of RubyVM::InstructionSequence we can get a human-readable string of the instruction sequences for a specific method. Using the disassemble method we will see each instruction.\nUsing def irb(main):023:0\u0026gt; puts RubyVM::InstructionSequence.disasm(normally_defined.method(:my_method)) == disasm: \u0026lt;rubyvm::instructionsequence:my_method\u0026gt; 0000 trace 8 ( 20) 0002 trace 1 ( 21) 0004 putstring \u0026#34;module method\u0026#34; 0006 trace 16 ( 22) 0008 leave ( 21) \u0026lt;/rubyvm::instructionsequence:my_method\u0026gt; Using class_eval irb(main):019:0\u0026gt; puts RubyVM::InstructionSequence.disasm(class_evaluated.method(:my_method)) == disasm: \u0026lt;rubyvm::instructionsequence:my_method\u0026gt;=============== 0000 trace 8 ( 1) 0002 trace 1 ( 2) 0004 putstring \u0026#34;Class evaluated method\u0026#34; 0006 trace 16 ( 3) 0008 leave ( 2) \u0026lt;/rubyvm::instructionsequence:my_method\u0026gt; Using define_method irb(main):020:0\u0026gt; puts RubyVM::InstructionSequence.disasm(dynamically_defined.method(:my_method)) == disasm: \u0026lt;rubyvm::instructionsequence:block in=\u0026#34;\u0026#34;\u0026gt;\u0026lt;module:definefoo\u0026gt;@/Users/brian.mehrman/ruby_projects/lightning_talks/class_eval-vs-define_method/dynamic_methods.rb\u0026gt; == catch table | catch type: redo st: 0002 ed: 0006 sp: 0000 cont: 0002 | catch type: next st: 0002 ed: 0006 sp: 0000 cont: 0006 |------------------------------------------------------------------------ 0000 trace 256 ( 13) 0002 trace 1 ( 14) 0004 putstring \u0026#34;Define method method\u0026#34; 0006 trace 512 ( 15) 0008 leave ( 14) \u0026lt;/module:definefoo\u0026gt;\u0026lt;/rubyvm::instructionsequence:block\u0026gt; At a quick glance we can see a major difference between the instructions for the method created using define_method. 
This extra step slows down the method’s execution. Where does this extra step come from? One word: closures.\nClosures Blocks, Procs, and Lambdas are types of closures that are used every day. A closure, simply put, is a self-contained section of code that can be passed around and executed at a later point in your code.\narr = [1,2,3] # Block def log(\u0026amp;block) block.call end log do puts arr.first end # Proc log = Proc.new { |arg| puts arg } log.call(arr.first) # Lambdas log = lambda { |arg| puts arg } log.call(arr.first) These closures can be great for storing code that you need to execute later; however, executing that code later comes at a performance cost.\nIn the case of define_method, the block passed to define_method is stored for use later by instance_eval. It is not entirely clear when the instance_eval occurs: when the method is defined, or when it is executed.\nclass_eval instantiates a new parser and compiles the source. Each method definition in the class_eval version does not share instruction sequences, whereas the define_method version does.\nConclusion While define_method is quick to create your method, it is slower to execute. It is also placed higher up the ancestor chain, slowing method lookup as well. define_method is great when you need a method that will be called in low volume. Using class_eval is slower to define the method, yet it provides the benefit of executing faster than if it were defined using define_method. Methods defined using class_eval are also placed lower in the ancestor chain.\nMeta-programming is a great way to DRY up your code and create code that can essentially write itself. This benefit comes at a cost, either at compile time or at run time. 
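The cost difference between the two definition styles is easy to check for yourself. Below is a minimal benchmark sketch; the class and method names (EvalDefined, BlockDefined, work) are invented for the example, and absolute timings will vary by Ruby version and machine:

```ruby
require "benchmark"

# Method defined by evaluating a string of source with class_eval.
class EvalDefined; end
EvalDefined.class_eval("def work() 1 + 1 end")

# Equivalent method defined from a block with define_method.
class BlockDefined; end
BlockDefined.define_method(:work) { 1 + 1 }

# Call each method a large number of times and compare total time.
n = 1_000_000
Benchmark.bm(15) do |x|
  x.report("class_eval:")    { obj = EvalDefined.new;  n.times { obj.work } }
  x.report("define_method:") { obj = BlockDefined.new; n.times { obj.work } }
end
```

On typical builds the define_method row tends to report more total time for the same number of calls, which is the cumulative cost described above.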
Knowing what that cost is can help in deciding what approach you should take.\n","permalink":"http://brianmehrman.com/blog/class-eval-vs-define-method/","summary":"\u003ch1 id=\"meta-programming\"\u003eMeta Programming\u003c/h1\u003e\n\u003cp\u003eHaving to write redundant methods can lead to hundreds of lines of code that can\nbe hard to maintain. Meta programming provides us with tools that allow us to write\ncode that writes itself.\u003c/p\u003e\n\u003ch1 id=\"class-eval\"\u003eClass Eval\u003c/h1\u003e\n\u003cp\u003eclass_eval opens the class and adds the method to the class itself. While this takes\nlonger to define a method, it puts the new method in the ancestor chain where it can\nbe accessed more quickly than if you used define_method.\u003c/p\u003e","title":"class_eval vs define_method"},{"content":"I\u0026rsquo;m a Principal Software Engineer based in the Greater Chicago area, focused on platform infrastructure and developer tooling that lets engineering teams move fast without breaking things — building cloud-native systems for multi-team engineering organizations.\nBackground My path into software was unconventional. I started as a Technical Artist at Volition / THQ in Champaign, IL — building Python tools, pipeline automation, and camera shaders for Saints Row 2 and Red Faction: Guerrilla. That early focus on building tools for other creators still shapes how I approach platform work today — I think of engineers as my primary users.\nAfter game dev, I moved into web development and spent a decade growing through the stack — from feature work on ad operations platforms to leading platform infrastructure for a 30-engineer organization.\nWhat I Do I work at the intersection of infrastructure, developer experience, and architecture. 
Most recently at Basis Technologies, I:\nDesigned and led an on-demand full-stack test environment system on AWS EKS — reduced setup from days-to-weeks down to 3–4 hours, scaled to 50–60 concurrent environments, and enabled teams to run parallel feature and integration testing without blocking shared environments Authored the company-wide CI/CD pipeline standard adopted across engineering — a 14-stage pipeline with Docker multi-stage builds, semantic versioning, and three deployment modes, replacing ad-hoc pipeline configurations across the organization Built and maintained the Kubernetes platform that runs product services, data pipelines, and developer tooling for the full engineering organization How I Work I believe the job of platform engineering is to disappear — to build systems so reliable and easy to use that product teams stop thinking about infrastructure and start thinking about problems.\nI care about documentation, naming things well, and making systems legible to the engineers who depend on them. I\u0026rsquo;ve led bootcamps, written internal standards, and mentored engineers because I think building good teams is as important as building good systems.\nI work closely with product and team leads to make sure platform investments line up with real bottlenecks in delivery.\nI\u0026rsquo;m drawn to problems at the edge of \u0026ldquo;this is how we\u0026rsquo;ve always done it.\u0026rdquo; Those are the places where questioning the constraint leads to a 10x improvement rather than an incremental one.\nOutside Work I\u0026rsquo;ve been exploring AI/LLM agent workflows as a serious development tool. 
My open-source Diagram Builder — a tool that turns TypeScript codebases into interactive 3D dependency graphs using React, Three.js, and Neo4j — was built primarily through agent-driven development with Claude and the Anthropic SDK.\nGet in Touch GitHub — code and open-source projects LinkedIn — work history and recommendations bcmehrman@gmail.com ","permalink":"http://brianmehrman.com/about/","summary":"About Brian Mehrman — Platform Engineer, Architect, Mentor","title":"About"},{"content":" bcmehrman@gmail.com · GitHub · LinkedIn · Greater Chicago Area\nSummary Principal Software Engineer focused on platform infrastructure. I design and ship the systems that let engineering teams move fast. Most recently, I built an on-demand Kubernetes test environment system that cut setup from days or weeks down to 3-4 hours and scaled to 50-60 concurrent environments for a 30-engineer organization, replacing shared long-lived integration environments. I also authored the company-wide CI/CD pipeline standard adopted across engineering. 15+ years across platform engineering, backend architecture, and developer tooling — with active practice in AI/LLM agent workflows.\nExperience Basis Technologies — Chicago, IL Formerly Centro; rebranded October 2021.\nPrincipal Software Engineer | Apr 2023–Present\nArchitect and own platform infrastructure for the engineering organization (~30 engineers across product and platform teams). 
Serve as Developer Liaison coordinating technical dependencies between product and platform groups.\nDesigned and led an on-demand full-stack integration environment system on AWS EKS — reduced test environment setup from days or weeks to 3-4 hours; scaled to 50-60 concurrent environments, replacing shared long-lived integration environments Authored the company-wide CI/CD pipeline standard: 14-stage pipeline, Docker multi-stage builds, semantic versioning, three deployment modes (locked standard, configurable, fully custom) Built long-lived Sales demo environment on Kubernetes with daily synthetic delivery data generation, weekly automated rebuild from snapshot, and on-demand org/user provisioning via pipeline Managed the full transition from the legacy environment pipeline to the new E2E pipeline suite — legacy pipeline retired Jan 2025 Designed namespace-per-service Kubernetes architecture for multi-application environments Led two company-wide bootcamp sessions on Kubernetes environment usage (Apr 2023, Dec 2024) Technologies: AWS EKS, Kubernetes, Helm, Docker, Kafka, Snowflake, Datadog, Python, Ruby on Rails\nStaff Software Engineer | Oct 2021–Apr 2023\nLed migration of all data pipeline workloads to Kubernetes (5+ pipeline types: data ingestion, dimensional modeling, batch aggregation, real-time event processing, Hadoop jobs) Drove company-wide CI/CD platform adoption — built first standardized pipelines for core API and analytics services Designed and built shared integration environment system with cloud data warehouse, event streaming, and persistent storage setup Completed Hadoop NameNode High Availability migration on Kubernetes Added M1 developer machine support Technologies: AWS EKS, Kubernetes, Helm, Docker, Hadoop, Kafka, Snowflake, Python, Ruby\nCentro — Chicago, IL Staff Software Engineer | Oct 2020–Oct 2021\nMigrated analytics data backend from monolith to dedicated analytics microservice Built Kubernetes-based local development environment 
tooling adopted across engineering Set up the initial Kubernetes environment for the core platform application Built Dockerized synthetic delivery data generation service for integration testing Authored technical evaluation for real-time campaign collaboration: WebSocket vs. polling analysis Technologies: AWS, Kubernetes, Docker, Python, Ruby on Rails, Kafka, PostgreSQL, Redis\nLead Software Engineer | May 2019–Oct 2020\nLed a three-milestone campaign performance dashboard initiative — replaced fragmented tables with actionable charts (pacing %, KPI %, agency margin %); reduced time to identify at-risk campaigns Designed real-time campaign collaboration system with WebSocket vs. polling evaluation Led product analytics GDPR compliance integration Owned engineering interview process and take-home exercises for QA and frontend roles Technologies: AWS, Ruby on Rails, Redis, PostgreSQL, JavaScript, React, Docker\nSenior Software Engineer | Aug 2017–May 2019\nDesigned and implemented backend for a queued bulk-edit system — async Sidekiq Batch processing with live progress polling, per-item receipts, and concurrency-limited job execution Refactored and migrated two overlapping data models into one unified model Implemented data rollback for database deadlock scenarios in an optimistic locking application Mentored engineers on Ruby and Rails architecture and design patterns Technologies: AWS, Ruby on Rails, Redis, PostgreSQL, React, Node.js, Docker, Java\nSoftware Engineer | Jul 2015–Aug 2017\nRefactored monolithic Campaign Overview endpoint into four parallel view-specific endpoints, enabling progressive page rendering Built a data generation utility for manual testing and a manual upload feature that ingests spreadsheet data into the interactive dashboard Technologies: AWS, Ruby on Rails, Redis, PostgreSQL, React, Node.js, Docker\nTukaiz, LLC — Franklin Park, IL Software Development Manager | May 2014–Jul 2015\nDeveloped multi-tenant application using LocomotiveCMS 
with back-end services and third-party email marketing integration Configured AWS infrastructure defenses against DOS attacks Established documentation culture that shortened developer onboarding Technologies: AWS, Ruby on Rails, PHP, MySQL, Redis, MongoDB, PostgreSQL\nApplication Development Specialist | Dec 2011–May 2014\nDesigned and developed a document storage system for large file uploads delivered via CDN Created signage placement guide generator (SVG-to-interactive-guide) Technologies: Ruby on Rails, PHP, MySQL, PostgreSQL, JavaScript\nVolition / THQ — Champaign, IL Technical Artist | Jun 2007–Jan 2011\nPipeline and tooling work for AAA game development. Game credits: Saints Row 2, Red Faction: Guerrilla.\nBuilt Python-based XML and light editors with live in-game feedback Designed camera-based parallax shader solution for in-game windows Developed workflow for rapid city layout prototyping using grammar-based generation software Technologies: Python, MaxScript, C++, C#, HLSL\nProjects Diagram Builder | 2026 | TypeScript, Node, React, Three.js, Neo4j\nOpen-source code visualization platform — parses repositories into a dependency graph, renders 2D and 3D layouts (force-directed, radial-BFS), supports semantic tiered views, and exports to multiple formats. 
Built with agent-driven development workflows using Claude Code and the Anthropic SDK.\nSkills Area Technologies Languages Ruby, JavaScript, TypeScript, Python, Java, SQL Frameworks Ruby on Rails, Node.js, React, Sinatra Infrastructure AWS EKS, Kubernetes, Helm, Docker, Kafka, Snowflake, Redis, PostgreSQL, MySQL, Datadog CI/CD Pipeline design and standards, Docker multi-stage builds, semantic versioning, Harness AI/LLM Claude Agent SDK, Anthropic API, Claude Code, agent workflow design Practices Platform architecture, distributed systems, developer tooling, DX, observability, technical mentorship Education Savannah College of Art and Design — Savannah, GA Bachelor of Fine Arts, Interactive Design and Game Development | 2002–2006\nCode Academy (now the Starter League) — Chicago, IL Ruby on Rails Development | 2011\n","permalink":"http://brianmehrman.com/resume/","summary":"Resume — Brian Mehrman, Principal Software Engineer","title":"Resume"}]