I’ve talked a lot about these two terms before in the context of software development: radial and sequential. Usually, I frame them in the world of unit testing. A radial app gets much more utility from unit tests than a sequential app does. That doesn’t erase the value of unit tests elsewhere, but when you’re a project leader, deciding how much to invest in unit tests is an important call. As theory-crafting engineers, it’s impossible for us to say “no, don’t unit test.” But as practical builders, it’s difficult to weigh the value of unit tests against more tangible measures of success, like agile sprint velocity, customer happiness, sales, etc.
Yeah, I know: “but unit tests prevent errors, and when there are fewer bugs, the customers are going to be happier,” you say. But a customer is also quite happy when they get the utility they want out of their app, and sometimes they value that utility above all else. Understanding the customer is crucial here, of course. It may well be the case that a given customer values happiness and overall sentiment more than simple utility, and the market and industry play important roles too. But I want to focus on the headline: radial vs. sequential, and how it fits into more than just software development. So bear with me for a few minutes.
A radial app is something like Microsoft Word. I think about it in terms of how the app is used. You walk into a building, and now you’re loaded into the app. Where can you go from here? With Microsoft Word, the experience is guided up to a point: launch the app, then either load an existing document or create a new one. Beyond that, the experience becomes very unguided. You can start typing, save whenever, insert a table of contents, launch the spell checker. You can do so many different things, and in any order.
Ever tried to manage a great big state diagram for something like this? It rapidly becomes infeasible. It’s difficult to test every permutation of states and the ways customers might use the app. That’s where unit tests gain their utility: they keep you from accidentally breaking a permutation that worked a week ago.
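To make that concrete, here’s a minimal sketch in Python of a unit test that pins down one permutation of operations in a radial app. The editor model and every name in it are hypothetical, invented for illustration, not Word’s actual behavior:

```python
# Toy model of a radial editor: a handful of operations that can be
# invoked in any order. All names here are hypothetical.
class Document:
    def __init__(self):
        self.text = ""
        self.toc = None
        self.saved = False

    def type_text(self, s):
        self.text += s
        self.saved = False  # any edit dirties the document

    def insert_toc(self):
        # Build a table of contents from heading lines ("#"-prefixed).
        self.toc = [line for line in self.text.splitlines()
                    if line.startswith("#")]

    def save(self):
        self.saved = True
        return self.text

def test_toc_then_type_then_save():
    # Pin down ONE permutation out of many: insert a TOC mid-edit,
    # keep typing, then save. If a refactor breaks this ordering,
    # the test catches it before a customer does.
    doc = Document()
    doc.type_text("# Intro\nhello\n")
    doc.insert_toc()
    doc.type_text("# Notes\nmore\n")
    doc.save()
    assert doc.toc == ["# Intro"]  # TOC reflects state at insertion time
    assert doc.saved
```

Multiply this by every ordering users actually exercise and you see where the test suite earns its keep in a radial app.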
A sequential app sits at the opposite end of the spectrum. And do note that no app is entirely black or white, entirely sequential or entirely radial; each falls somewhere on the spectrum. The sequential app is a more guided experience that often can be smoke-tested. Even Microsoft Word is sequential up to a point, and thus some of it can be smoke-tested, at least as far as opening and saving a document.
The sequential app is much more guided, even beyond the standard launch, open, and save. It’s hard to think of open-ended, sort-of open-world experiences that are very sequential, but I’ve got one: a mortgage point-of-sale application, in which loan “files” are created and saved, much like Word documents. Many aspects of loan processing are very sequential: you need to run credit, then price out the loan, lock the rate, and then send out disclosures. Granted, it’s not entirely sequential; some electronic verification steps can replace manual ones, and the communication involved often makes each loan unique.
But still, far more than a Word document, the mortgage POS is very sequential. And this is where unit tests lose some of their utility. Any time you need to fix bugs or add features to something like loan locking, the work requires running credit and pricing the loan first. The value of unit tests on pricing is lower, because manual testing of locking necessitates manual testing of pricing. This isn’t to say unit tests become worthless; permutations pervade all systems, and edge cases always exist. It just means the value is reduced, which makes it harder to weigh against big-picture customer success metrics.
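A toy sketch of that dynamic, with entirely hypothetical stage names and numbers: in a sequential pipeline, exercising a late stage necessarily exercises the stages before it, so a manual pass over locking already covers pricing.

```python
# Hypothetical mortgage pipeline: each stage refuses to run until the
# previous stage has produced its data, mirroring the guided flow above.
def run_credit(loan):
    loan["credit_score"] = 700  # stand-in for a real credit pull
    return loan

def price_loan(loan):
    if "credit_score" not in loan:
        raise ValueError("must run credit before pricing")
    loan["rate"] = 6.5 if loan["credit_score"] >= 680 else 7.25
    return loan

def lock_rate(loan):
    if "rate" not in loan:
        raise ValueError("must price the loan before locking")
    loan["locked"] = True
    return loan

# Manually testing the lock stage end-to-end forces credit and pricing
# to run first, so it doubles as a smoke test of those earlier stages.
loan = lock_rate(price_loan(run_credit({"id": 1})))
```

Skipping ahead fails fast: calling `lock_rate` on a loan that was never priced raises immediately, which is exactly the sequential property that lets one manual pass cover several stages.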
That’s enough of the software stuff, though. Radial vs. sequential spreads beyond software, because these terms describe experiences in general: construction projects, or any craftwork. Project developers for a new Costco have much to consider, and much to comply with, in terms of safety and certifications. Inspections need to be done, and in a sense, inspections equate to unit tests: they represent verification work done by other agents.
In software development, unit tests are automated by cloud agents that run them before every new build or pull request. In construction, the unit tests are the inspectors: agents apart from the builders who come by to give pass/fail grades across different areas. It’s actually nicer in construction, because the Costco project developer just needs to order the inspections and doesn’t really need to define what to inspect. In software, the developer must define the unit tests, which makes them more expensive. The construction inspector has done his job many times before, so his experience carries that burden; in software, it falls on the developer.
It’s fun to draw these parallels, and neater yet to understand why there are so many similarities between construction and software development: both are forms of development. As a software engineer, I’ve long been attuned to what I really love, which is to build things. It’s no surprise I’m an avid DIYer, and that passion flows into gardening too, another form of development. Although gardening is probably more about maintenance, drawing parallels there is fun too.
But what else? What about video games?
The Legend of Zelda on the NES, the very first release, is a very radial game. It features several dungeons where you must collect the pieces of the Triforce, but it doesn’t restrict the order of your adventure. Granted, the first dungeon is much easier than the last, and that difficulty curve is about the only guidance. There are no mechanical guides or obstructions otherwise, save for a few: you need an arrow to kill some bosses, and you need a raft or bridge to access certain required areas. Considered one of the first “open world” games, Zelda is radial by design, in line with the desires of its creator, who related the game to his own childhood explorations.
Mega Man is a game that mixes radial and sequential. The stage selection is entirely up to you, and that’s one of the things that makes the game an all-time great. There are inherently optimal orderings of stages, but it’s up to you to figure them out. After selecting a stage, however, the experience leading up to its boss fight is entirely sequential and guided. When you defeat a stage’s boss, you gain its weapon, which is often optimal against another boss. Hence the ordering can be important, but for the avid challenger, any order is possible. Mega Man blends radial and sequential beautifully, and it’s enjoyable because of that balance.
Now let’s venture to the sequential: Final Fantasy. The original NES game, and really almost every Final Fantasy that followed. These games are legends in the RPG genre, and they are fully sequential because they’re telling a story — much like a novel. And on that note, all books are sequential as well… except, of course, choose-your-own-adventure books, which are radial.
Back to Final Fantasy though — RPG fans love them for their storytelling, combined with the developmental nature of the progression. Players invest in their characters as they grow, level up, gear up, and fully immerse themselves in the protagonists. It mirrors the developmental aspects of what I love most, and what I think drives many engineers who also share passions for DIY, gardening, and RPG gaming.
Now, I’m sure sequential vs. radial isn’t an entirely unexplored topic. In fact, tropes like linear vs. non-linear appear more often in the literature. But I like to frame it my own way, pulling from gaming, because while the real world might be an unstructured, radial free-for-all where anything can happen, we can still escape into game worlds where things are more focused and guided.
Radial might represent the real world’s chaos of endless permutations and possibilities. Sequential helps control that chaos — and when possible, I see a lot of value in introducing sequential flow to make sense of it.
Joseph Krall
Life is a simulation; we all live in a mainframe called the universe.
Wednesday, August 20, 2025
Tuesday, May 13, 2025
AI: Our Organized Companions in a World of Unstructured Thought
If there’s one thing that fascinates me most about artificial intelligence, it’s not the sci-fi hype or the endless “robots taking over” headlines. It’s the idea that AI can be our companions: partners that help us make sense of the chaos in our minds and the world around us.
Let’s be honest: our brains are incredible, but they’re not exactly organized. We store information in a wild, unstructured jumble: memories triggered by smells, half-remembered facts popping up at random, ideas connecting in ways that make sense only to us. Our minds are more like a messy attic than a neatly labeled filing cabinet.
AI, on the other hand, is all about structure. It takes in massive amounts of information, organizes it, and makes it accessible in ways we simply can’t. That’s why, to me, AI isn’t some distant threat or magic bullet. It’s a tool: a powerful, organized companion that helps us bridge the gap between our unstructured thoughts and the structured world we need to operate in.
Here’s the key: AI doesn’t do the work for you. It doesn’t think for you. It’s not a replacement for your creativity, your judgment, or your unique perspective. Instead, it’s a tool, a really, really good one, that can help you organize, clarify, and communicate your ideas more effectively.
But with great power comes great responsibility. There’s a real risk in letting technology take over too quickly, which I discussed in my previous blog post, “The story of the Grigori and penicillin.” Just as overusing antibiotics can lead to resistance, over-relying on AI without understanding or boundaries can leave us exposed, uncritical, and even less capable. The trick is to adopt AI thoughtfully, safely, and in a way that enhances, not replaces, our own abilities.
At their core, AI chatbots are built on large language models: vast models trained on everything from classic literature to today’s tweets. They’re designed to spot patterns, track context, and generate responses that make sense. Think of them as encyclopedic companions who can summarize, clarify, and reformat information at lightning speed.
Why should we care? Because we’re living in an age of information overload. There’s more written word out there than any one person could read in a hundred lifetimes. AI can help us cut through the noise: summarizing long reports, clarifying confusing emails, converting data into readable formats, or just helping us get our thoughts in order.
But let’s not get carried away. AI isn’t perfect. It can misunderstand, make mistakes, or reflect biases in its training data. That’s why responsible use is so important. Don’t feed it sensitive information. Don’t blindly trust its output. Always review, question, and use your own judgment.
Ethical use means being aware of privacy, security, and the potential impact on others. It’s about using AI as a tool to amplify your strengths, not as a crutch that dulls your skills or judgment.
How do you use AI effectively? Be clear and specific: the more detail you give, the better AI can help. Give context: let AI know the audience, tone, or format you want. Keep sessions focused: stick to one topic per session for best results. Iterate and refine: don’t expect perfection on the first try; tweak your prompts and learn from the output. Stay ethical: protect privacy and use AI in ways that are safe and appropriate.
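Those tips even translate to code. Here’s an illustrative sketch, with a helper and field names entirely of my own invention (not any real AI library’s API), of folding clarity, context, and format into a single prompt:

```python
def build_prompt(task, audience=None, tone=None, output_format=None):
    """Fold the tips above into one clear, specific prompt string."""
    parts = [f"Task: {task}"]
    if audience:
        parts.append(f"Audience: {audience}")     # context: who is this for?
    if tone:
        parts.append(f"Tone: {tone}")             # context: how should it sound?
    if output_format:
        parts.append(f"Format: {output_format}")  # be specific about output shape
    return "\n".join(parts)

prompt = build_prompt(
    "Summarize the attached quarterly report in five bullet points.",
    audience="non-technical executives",
    tone="neutral",
    output_format="bulleted list",
)
```

The point isn’t the code itself; it’s that a prompt with an explicit task, audience, tone, and format gives the model far more structure to work with than a bare one-liner.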
AI is here, and it’s only getting better. The future belongs to those who see AI not as a rival, but as a companion: a structured partner to our unstructured minds. Use it wisely, adopt it thoughtfully, and let it help you become more organized, creative, and effective.
Don’t wait for the world to change around you. Start exploring, keep learning, and remember: the best results come when you combine the power of AI with the irreplaceable spark of human insight.
Thursday, May 8, 2025
AI is Here
A recurring story appears again and again in history and fiction: powerful knowledge or technology is given to those who aren’t ready for it, and chaos follows. From ancient myths to real-world breakthroughs, this cautionary tale reminds us that progress without responsibility can be dangerous. Today, as artificial intelligence advances at a rapid pace, this lesson feels more relevant than ever.
One clear example of this cautionary tale shows up in the sci-fi game Star Ocean: The Last Hope. In the game’s story, the Grigori are mysterious beings who accelerate the evolution of less advanced species. The Eldarians, an advanced alien race, once shared their technology with humanity, but the consequences were devastating. The Grigori’s influence led to chaos and destruction, making it clear that handing over powerful knowledge or tools to those unprepared almost always ends badly. Sometimes, the wisest choice is to let societies develop at their own pace, gaining the maturity needed to handle new power responsibly.
This theme isn’t just fiction. History offers its own version of this warning. In 1928, Alexander Fleming accidentally discovered penicillin, a breakthrough that would revolutionize medicine. But the world wasn’t immediately ready for it. Penicillin’s use was initially restricted, not so much to help everyone but more as a way to control power. It wasn’t until 1942, when a tragic nightclub fire in Boston created urgent demand, that penicillin was widely used to save lives. This real-world example echoes the Grigori story: even the greatest discoveries can cause unintended consequences if released too soon or without proper understanding. Timing, control, and responsibility matter just as much as innovation.
Fast forward to today, and we find ourselves facing a new chapter in this old story. AI has been developing for decades, and with recent advances in generative models like ChatGPT, it’s suddenly everywhere. Popular culture often paints AI as either a miracle or a menace, sometimes both. Many stories follow a familiar pattern: a breakthrough leads to misuse, conflict, or disaster. But are we really at that breaking point? Not yet. Much of what we call AI today isn’t intelligence in the human sense. It’s a powerful tool that uses vast amounts of data to generate text, images, and more based on patterns. It’s impressive, but it remains just a tool - not a sentient being.
The hype around AI replacing jobs or experts is real, and so is the fear. Yet the real risk isn’t the technology itself but how we choose to use it. Without proper understanding and control, AI can amplify problems like misinformation, scams, and security threats. But used wisely, it can also accelerate learning, creativity, and problem-solving.
So what’s the way forward? The story of the Grigori and penicillin teaches us that powerful tools require careful stewardship. We need to learn how AI works, set clear rules for its use, and remain vigilant about its risks. Rejecting AI outright isn’t the answer. Nor is blindly embracing it without caution. Instead, we must adapt, developing the skills, policies, and ethical frameworks that allow us to harness AI’s benefits while minimizing harm.
This is a new chapter in humanity’s ongoing story with technology. The choices we make now will shape whether AI becomes a force for good or a source of chaos.
The ancient tale of the Grigori, the historical journey of penicillin, and today’s AI revolution all share a common thread: powerful knowledge and technology come with great responsibility. AI is no different. It is here to stay, and it will change our world in profound ways. Our challenge is clear. We must control AI, not let it control us. By approaching it with humility, care, and wisdom, we can write a future where AI empowers humanity rather than endangers it. The story of AI is still being written. Let’s make sure it’s one worth telling.
Wednesday, February 26, 2025
Stuff I do as Principal Software Engineer
It's been a long time since my last post - really, it's been about 5 years. In these last 5 years, I've matured significantly in my profession. When I last wrote, I was a Senior Software Engineer. Today, I write as a Principal Software Engineer, to discuss both what I do in that role and how it differs from my previous position.
When I joined the new company, I signed on as a Senior Software Engineer. My duties overall remained unchanged from those at the prior company: I would join the team, take on functional requirements, and implement them. I started with a few simpler changes, then quickly worked my way up to more complex ones, including a complete integration with a pricing and product engine - the kind of integration that was essential to the POS (Point of Sale) web platform we were building. Developing this integration let me show the first signs of my abilities, as I needed to coordinate with the third-party vendor to ask questions and clarify the process of connecting our POS to their service. Our POS platform, mind you, was not modern either, which isn't atypical in this industry. The project was riddled with technical debt across many convoluted layers of spaghetti code. Understanding how all those layers worked together was no small feat, but taking on the task of adding documentation no one had previously had time to create - charts, flow diagrams, and database schematics - helped make sense of everything. As far as integrations go, pricing is one of the largest, right behind disclosure generation and passing all of the data over to an LOS (Loan Origination System).
With a major pricing integration delivered, it was becoming clear I was a major player. Then something interesting happened: in our small team, both the lead engineer and the engineering director resigned. There was no clear reassignment of their duties, but there was an obvious vacuum of responsibility - one it seemed only I was suited to fill. For me, responsibility and duty don't carry the weight those words seem to imply; rather, I'm eager to do what needs to be done, and adept at finding out what that is. The team had been without a lead engineer, but in my role as a senior, I was already acting as a mentor throughout the team. As projects continued, more and more opportunities arose, and each time I rose to the challenge. First, it was a migration from TFS to Git. Then it was the migration from SDK usage to API usage within the LOS integration - our platform's single largest integration by far.
"SDK to API," as it was called, was a project spanning about 12 months. I led both an onshore and an offshore team of engineers as we tackled it, starting with key decisions about whether to simply upgrade the old SDK solution in place or scrap it and build something brand new on the latest version of .NET. The benefits of building something new were obvious, but it would also mean changing how the POS platform called the new API solution. It was during this project that I petitioned for a promotion, given all that I had been doing - I was essentially both a lead engineer and a director for at least six months. I was readily given the promotion, to Principal Software Engineer, the role I argued I was already performing (and wanted). The directorial duties involved the less-than-engineering aspects of management - aspects of the industry I didn't care for as much as technical leadership. Management is a needed function within the business, but I was happy to remain on the pure technical track as we hired a new director for the company.
As the resident expert in the legacy platform - which retains its value until the modernization platform is fully fleshed out and caught up to the legacy offerings - I still had a team to lead, vendor integrations to keep updated (occasional vendor upgrades are required as the various integrations age), and even new features to develop to keep the business running while the wheels of modernization spin. Today, I act in the capacity of a Principal Software Engineer across both the legacy and modernization teams. For the legacy team, I am the lead engineer. Historically a waterfall team, we are pushing toward formal agile development, and I lead that transformation through the various agile ceremonies. In recent sprints, the team has delivered some of its first on-time, full sprint completions, showing the first signs of true agile adoption and the ability to plan and meet deadlines.
In summary, being a Principal Software Engineer doesn't feel like a huge step up in responsibility; it's become part of who I am as I've matured in the role. I often ask a question in interviews: what do you think the difference is between a senior and a lead engineer? I've heard lots of answers, and I can compile those thoughts with my own. Of all the aspects of leadership, the one that stands out most to me is being not just a technical leader but a motivational leader. There's a somewhat unmeasurable quality here: a leader's ability to build rapport with team members, to catalyze growth on both the individual and collective levels, and to be a sort of beacon of support within and outside the team. While a senior engineer might show some of these qualities within the development group - mentoring other developers and making key technical decisions - the lead engineer transcends the technical boundaries and begins to mentor everyone on the team.
Anyway, that's it for now. I'll try to post more frequently than once every five years. Perhaps more details on the API layers I've built are in order - after all, I really do enjoy API development in general.
New Company
In 2021, I switched companies from one that worked on machinery diagnostics in a desktop application to one that develops a digital web solution for both borrowers and loan officers in the mortgage fintech space. The core tech stack remained the same - essentially .NET. But the two companies vary quite wildly in how that core tech stack is utilized. The prior company used .NET to build a desktop application with WPF and XAML. The newer company uses .NET as the backend to a web application built in Angular. The old one didn't really use APIs in the traditional sense, except to power automated testing. The new company primarily deals in APIs, if not from the Angular frontend to the .NET backend, then from the .NET backend to third parties, to integrate with services such as ordering a credit report. The old one had an international presence and was a much larger company. The new one, being in the mortgage space, primarily focuses on the US market and has a much smaller team.
Understanding how all these layers worked together was no small feat - but taking on the task of adding tons of documentation, including charts, flow diagrams, and database schematics that no one ever had the time to previously create - all this helped make sense of everything. As far as integrations go, pricing is one of the largest ones, right behind disclosure generation and passing all of the data over to an LOS (Loan Origination System).
With a major pricing integration delivered, it was becoming clear I was a major player. Then something interesting happened - in our small team, both the lead engineer and engineering director had resigned. There was no clear assignment of the duties, but there was an obvious vacuum of responsibility - one it seemed only I was suitable enough to consume. For me, responsibility and duty don't have the same tone that their words seem to imply - I rather am eager to do what needs to be done, and am avid at finding out what needs to be done. The team had been without a lead engineer, but in my role as a senior, I was still primarily a mentor throughout the team. As projects continued, more and more opportunities began to arise, and each time, I stood up to the challenge. First, it was a migration from TFS to Git. Then it was the migration of SDK usage to API usage within the LOS integration - our platform's single largest integration by far.
"SDK to API", as it was called, was a project spanning about 12 months. I led both an onshore and an offshore team of engineers as we tackled it, starting with key decisions about whether to simply upgrade the old SDK solution in place, or scrap it and build something brand new on the latest version of .NET. The benefits of building fresh were obvious, but it would also mean changing how the POS platform called the new API solution. It was during this project that I petitioned for a promotion, given all that I had been doing - I had essentially been both a lead engineer and a director for at least six months. I was readily given the promotion - to Principal Software Engineer - the role I argued I was already performing (and wanted). The directorial duties involved the non-engineering aspects of management - aspects of the industry I didn't care for as much as technical leadership. Management is a needed function within the business, but I was happy to remain on the pure technical track as we hired a new director for the company.
Promotion
As a Principal Software Engineer, I continued all that I had been doing thus far: new development, mentoring new hires, conducting interviews as we expanded the company, and refining our policies - tightening the ones that were working and modifying the ones that weren't. I also worked with the new director, not only on onboarding but on building proposals for addressing our legacy platform's massive tech debt. Much like with the SDK to API project, it was decided to rebuild the POS platform from scratch in a new effort to modernize our solutions. This meant hiring more engineers, including lead engineers to run the new teams needed to build out the modernization solution.

As the expert in the legacy platform - which retains value to this day, until the modernization platform is fully fleshed out and caught up to the legacy offerings - I still had a team to lead, vendor integrations to keep updated (occasionally vendor upgrades must be made as the various integrations age), and even new features to develop to keep the business running while the wheels of modernization spin. Today, I act in the capacity of a Principal Software Engineer across the legacy and modernization teams. For the legacy team, I am the lead engineer. It has been a waterfall team, but in a push toward formal agile development, I am leading the transformation through the various agile ceremonies. In recent sprints, the team is consistently making some of its first on-time full sprint completions - the first signs of true agile adoption and the ability to plan out and meet deadlines.
In summary, being a Principal Software Engineer doesn't feel like a huge step up in responsibility - it's become part of who I am as I've matured in the role. I often ask a question in interviews: what do you think the difference is between a senior and a lead engineer? I've heard lots of answers, and I can combine those thoughts with my own. Of all the aspects of leadership, the one that stands out most to me is being not just a technical leader but also a motivational one. There's a somewhat unmeasurable quality in a leader's ability to build rapport with team members, to catalyze growth on both the individual and collective levels, and to truly be a beacon of support within and outside the team. While a senior engineer might show some of these qualities within the development group - mentoring other developers and making key technical decisions - the lead engineer transcends the technical boundaries and begins to mentor everyone on the team.
Anyway, that's it for now. I'll try to post more frequently than once every five years. Perhaps more details on the API layers I've built are in order - after all, I really do enjoy API development in general.
Friday, February 28, 2020
Why you should do more sprint-level regressions and shift-left

In software development within an (at least somewhat) agile environment, one challenge I've always had is allocating enough time for regression. The word evokes the idea from statistics: to regress the amount of error toward a minimum. While developing features at the sprint level, it's important to ensure that before the sprint finishes, the feature has been regressed and hardened, leaving very few defects. When we keep producing features that haven't been regressed and well tested with a shift-left strategy, we punt on the plan that makes defects cheapest to fix. Instead, we end up testing features later rather than sooner, when they are more expensive to fix. This is why regression periods need to occur during the sprint, rather than at the end of a release cycle.
It is also much more enjoyable and preferable as a developer to fix bugs earlier, when the developer discovers them during sprint-level regression periods. When bugs are found in testing at the end of a release cycle, a bug report is drafted and a work item is handed to the developer. This carries its own bevy of issues, ranging from the inability to properly communicate the problem to failing to convincingly sell the value of a fix. Some pride is sure to be impacted as well - no developer wants to create bugs, although all developers should know the value of one. It is a lesson we can all learn from and improve on with age. It is certainly a point of pride, however, to develop a feature and deliver it to the end of a release cycle without bugs. To know that testers have thoroughly sifted through your code and found no problems is something to live for. And it should motivate us all toward the challenge: getting better at ensuring our features are completely "done-done".
It can be hard to justify to product owners the additional time spent re-running acceptance criteria and re-evaluating manual test cases. They see merely the result of a sprint: a feature that has been delivered and demonstrated. So when we've finished developing a feature in 8-9 days, it can be difficult to explain the value of the last 5-6 days. Beyond showing a decreased volume of bugs near the end of a release cycle, we have very little mid-sprint evidence. So I think all we can do is rely on the time-proven results from case studies performed by others in the industry. This stems from the topic of "shift-left", which means shifting testing earlier in the development cycle, where bugs are cheaper to fix.
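To make the cost argument concrete, here is a toy back-of-the-envelope model. The cost multipliers and bug counts below are illustrative assumptions for the sketch, not measured data - the industry case studies linked below are the real evidence - but even rough numbers show why catching a larger fraction of bugs in-sprint dominates the total cost of fixing them.

```python
# Toy model: total relative cost of fixing bugs under two testing strategies.
# IN_SPRINT_COST and LATE_STAGE_COST are assumed multipliers, chosen only to
# illustrate the shift-left argument - substitute your own team's numbers.

IN_SPRINT_COST = 1    # relative cost of fixing a bug found during sprint-level regression
LATE_STAGE_COST = 10  # assumed relative cost of fixing the same bug at end of release cycle

def total_fix_cost(bugs: int, fraction_caught_in_sprint: float) -> float:
    """Total relative fix cost for `bugs` defects, given the share caught early."""
    caught_early = bugs * fraction_caught_in_sprint
    caught_late = bugs - caught_early
    return caught_early * IN_SPRINT_COST + caught_late * LATE_STAGE_COST

# A team that catches 80% of 50 bugs in-sprint vs. one that catches only 20%:
shifted_left = total_fix_cost(50, 0.8)   # 40*1 + 10*10 = 140
shifted_right = total_fix_cost(50, 0.2)  # 10*1 + 40*10 = 410
print(shifted_left, shifted_right)
```

Under these assumptions, the team that shifts left spends roughly a third of the fix cost - which is the mid-sprint argument to bring to a product owner when the "extra" 5-6 days of regression time is questioned.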

Finding bugs late means having to dig up old details and re-learn the code area to figure out a fix. Even worse, when someone else finds a bug, it can make any developer feel stupid. Honestly, no developer should ever feel truly offended - bugs are a part of life, and we do learn from them. And I don't think most developers actually take offense. Writing software can be very euphoric, almost like a drug. When we go home in the evening, we usually feel pretty great about what we've done. But when bugs arise, it can take us down a peg or two. That's something we'd generally like to avoid, so suffice to say, bugs discovered late can lead to unhappy developers. And unhappy developers can lead to decreased productivity. All the more reason to employ a shift to the left: make the time to find those bugs early.
To summarize: we should all make more time to complete our features before saying that they're "done". Stop taking on new work and start finishing the existing work. You'll thank yourself later. And so will product owners. In business environments where late-stage regression finds many bugs, it should stand as an important alarm that many developers still need to shift their testing much further to the left. Once we do that, we should start noticing a decrease in the number of bugs that arise late-stage.
More reading:
On the value of automated testing.
(2020) Details on what shift-left is.
(2017) Some case studies of improvements found in shift-left testing.
(2018) Testing left reduces bugs found and reduces overall cost.
(2008) Devs realize bugs are life, but are happy to find their own.
(2014) Feels stupid when others find your bugs
(2008) On the dev & tester relationship and finding too many bugs
(2018) Happy devs are more motivated, do better process related work
Wednesday, August 22, 2018
Categorizing Faults to Playability
Video games are much more enjoyable if they are replayable. But before a game can be replayable, it must first be playable. This means limiting and mitigating distractions to playability - a wide array of faults and poor design decisions that hinder the player's ability to enjoy the game. Hence, an enjoyable video game must provide strong playability. Here, we try to categorize a few of these hindrances.
The Rules Change
Imagine playing a game where some rules are clearly defined early. For example, falling into a pit means the player loses the game and must start from the beginning. This is a form of punishment, and through negative reinforcement, the player learns not to do that again. Now imagine that later in the game, it is necessary to jump into a pit to advance. The player will probably reach this point and not know what to do, unless by accident they fall into the pit where the rules have changed. This happens in Super Pitfall (NES), where the player must jump into a flying bird enemy to activate a warp zone to continue advancing the game. Every other instance of the bird will kill the player and cost them progress, so the player has already learned to avoid birds. With the rules changed, most players are clueless about how to advance and will give up. Upon learning that the way forward was to jump into the bird, the player will feel even more frustrated, because now they realize they have to jump into every bird just to see whether it kills them. This is especially brutal when the punishment is also brutal.

The rules change in other games as well - for example, in Bill and Ted's Excellent Video Game Adventure (NES), jumping into grass typically causes the player to fall, but in some cases it has no effect. In Rocky and Bullwinkle (NES), stairs that lead to the room above typically touch the top edge of the screen, but in one instance the stairs don't touch the top edge and yet still function like any others.
Imbalanced Punishment
Punishment is one way of teaching the player what not to do. When the punishment is too severe, it can cause the player to quit. Imagine playing through an arduous level and, just near the end of the stage, having to try again from the beginning. This can make the game near impossible to win, although the excitement of finally winning can translate into immense euphoria. This fault to playability also fits the model of the Flow Zone, in which most players expect the game's difficulty to fall somewhere between "too difficult" and "too easy". Harsh punishment can make the game too difficult, and a lack of punishment can make it too easy - and in some cases, very confusing and non-intuitive.

In Fester's Quest (NES), anytime the player dies, they must start from the very beginning - there are no checkpoints. This also happens in a lot of games, including Ninja Gaiden (NES): if the player is defeated while facing the final boss in Act 6-4, they must, contrary to the norm up until that point, begin again from Act 6-1. Usually upon defeat, the player simply restarts at the previous checkpoint of the same stage, and each stage typically has many checkpoints. Because Act 6 is one of the hardest in the game, this kind of punishment led many players to quit before ever beating the game.
Non-Intuitive
Intuitive games can be played with little to no direct guidance. These are games that "make sense" and lead the player where they need to go. A game that isn't intuitive will leave the player wondering what to do.

In Super Pitfall (NES), there are items which are invisible. The player must visit everywhere, jump everywhere, and touch every spot to determine where they are. Without any clear direction or sense of where to go, the player starts to lose their intuition, which is a serious detriment in video games. If a player feels like they don't know how to play, they're going to stop playing. If the items were visible, the player would have a sense of direction and clear guidance on what to do. That way, the player can feel a sense of accomplishment in knowing what to do, and still experience the mystery of discovering new items and finding out what they do.
Something similar happens in Castlevania 2 (NES) where the player must kneel down in a corner to summon or wait for a tornado to carry them off-screen. Since the wait time is more than just a second, this can become really confusing. The same thing happens in Earthbound (SNES) where the player has to wait behind a waterfall for 3 whole minutes to advance the game. At this point, some of these can feel more like interruptions to the game rather than actually playing and enjoying the game.
In many games, another common pitfall is when you can't see where gaps and death traps are. If the floor appears whole but actually has holes in it, this can be really frustrating to the player. It might be fine for secret areas, as long as they aren't necessary to advance the game, but when it becomes the norm, it can seriously break playability.
Interruptions
Interruptions are moments when the game stops the player from playing, or slows their reactivity, because it needs a few moments to do or display something. This can be related to network latency or poor system performance, but it can also be caused by poor game design or inefficient implementations. When the player is stopped, they can no longer respond and are forced to watch. In some games this may be appropriate, such as with cut-scenes that show the player how the story advances. However, such interruptions need to be kept to a minimum, and as always, different players have different tolerances for these kinds of things.

In Kid Kool (NES), the game appears to be a common side-scroller that slides left and right as the player advances forward and backward. However, there is also another screen above, and if the player visits that screen, the game has to scroll the window all at once, resulting in a full one-second delay. This could be fine if the stages were well designed. However, many of the stages have the player visiting and leaving the top screen quickly, such as when they're on a high platform on the lower screen and have to jump across a pit - the jump transfers the player to the upper screen, and as the player character falls, they are immediately transferred back to the lower screen. A jump should be a fluid action, and any distractions along the jump are serious detriments to playability that make the game that much harder to complete.
In Castlevania 2 (NES), the game cycles from day to night. When this happens, the game freezes to display a message with very slow text telling you just that. This imposes about a 5-second delay on the player, and it can happen at any time - including right in the middle of a jump or while fighting an enemy. At least some games, like Dragon Warrior (NES), give the player the option to change the speed at which text is displayed.
In Earthbound (SNES), the same thing happens when the player's dad calls to "check in". This results in a dialog pop-up with long, arduous text, and as it happens repeatedly, it can be very frustrating.
In many games, the player moves too slowly, which can break the fluidity between the player and the game - essentially an interruption of sorts, because the human mind can think faster than the game takes to execute the requested action. This ties into our next category.
Lack of Fluidity
Fluidity is important in games because, as the player immerses into the game, there is an expected fluid interface between player and game. If this interface is lacking, the fluidity isn't optimal and it can be difficult to remain immersed.

Take driving a car, for example - you never have to look down to see the pedals, so you can keep your eyes focused on the road. The pedals provide a fluid interface between you and the car, meaning you aren't too distracted while behind the wheel.
Sometimes, a game can be lacking in providing a fluid interface. These are distractions and they can seriously decrease the game's playability. In many games, there is a common complaint that the player can't kneel while they shoot, or that they can only shoot in so many directions. This lack of "full control" is a detriment to fluidity, as it can remind you that you are playing a game with limitations and must adhere to the decreased control.
Another complaint, mentioned in the previous category, is that the controls are too slippery or the character is too slow. Both of these again remind the player that they are inside a game, and can break immersion.
In Fester's Quest (NES), some of the shots are curved, as in many other games. Curved shots become difficult to aim so that they actually hit enemy targets. These too are limitations that become distractions.
Why do games fail playability?
After categorizing some of the common pitfalls to playability, one has to ask themselves, how did the game developers ever produce a game with these kinds of detriments? Some research suggests that the project leads or product owners don't actually play any games, so they don't really know what makes a game great or not. At the same time, they insist on owning the direction of the game and don't listen to any suggestions.
Sometimes quality assurance is to blame. Developers and testers might report these issues and even glitches, yet the bugs never get fixed - either because they're closed as "As Designed" or because of a lack of funding or ownership.
Even more frightening, sometimes developers come together without any game idea or story at all. Totally unprepared, it's hard to imagine what they expected to come up with.
Without a playable demo, a game is essentially untested before it goes down a path it cannot return from. Unfortunately, many games lack proper demos.
Sometimes companies feel like they have experience and mastery over fun - toy developers, for example. But when it comes to games, that experience doesn't carry over. Unfortunately, the managers sometimes don't see it that way.
Other times, games fail because of infrastructure and internal employee issues. If there is any kind of block on communicating suggestions - or on calling out what some may feel are bad game decisions, perhaps because of the risk of being reprimanded by frightening bosses - then those suggestions are never effectively communicated, and a bad game is produced.
In any case, it seems clear that the way forward to better games is to learn from the mistakes of the past. If these categories summarize those mistakes even briefly, then perhaps newer developers can learn without repeating these kinds of failures.
Tolerances to Playability
Every player is different. Some players have a high tolerance for faults to playability, and others have a low one. It suffices to say, however, that minimizing these faults is a good start for any game developer. A successful game relies on players being able to play the game over and over. The more hours played, the better. At some point, high hours of play turn into positive reviews, which increase the virality of the game and generate more sales. Hence, playability is an important aspect of game development - even more important than replayability, because if you can't play a game in the first place, how can you expect to play it over and over?

Saturday, May 5, 2018
Rating games according to the aspects of replayability - Deadly Towers
For more info on the game - https://en.wikipedia.org/wiki/Deadly_Towers
For more info on the aspects of replayability, refer to my technical journal paper, located at https://file.scirp.org/pdf/JSEA20120700001_38193851.pdf.
Social: 1
Challenge: 10
Experience: 8
Mastery: 8
Impact: 4
Completion: 6
Playability: 2
Deadly Towers is a notoriously horrific game of the NES era. It features many reasons why it may be hard to continue playing, such as starting from the beginning when you die. Most of these affect the game's playability and overall enjoyability. Placing the game on the Schemico spectrum actually gives the game some positive merits, and with some improvements to the playability of the game, its sounds and enemy interactions, the game might've been one of the greatest NES hits.
On the social axis, there's typically very little reason to play classic NES games, except for the off chance of playing with a group of friends. While we don't play directly against one another, we might play by taking turns or by watching each other play. For that reason, I give the game a single point in the social category.
The level of challenge in Deadly Towers is quite high, simply because the game is genuinely tough to beat. The maze-like dungeons can be quite large and difficult to navigate (a mini-map would have helped playability). Shops for purchasing potions, better equipment, and other required items are sporadically located in those dungeons, with very little direction on how to reach them. Enemies can do a lot of damage, in addition to the knock-back effects they inflict. Bosses can be punishing and difficult. And overall, reaching a boss and destroying each bell tower - while deaths can send you back to the beginning - is quite a hurdle to overcome, although this is also a hit to the game's playability. If one were able to beat Deadly Towers and see the ending credits, that would be quite the accomplishment - though perhaps more an achievement in patience. For these reasons, I placed the game's challenge level all the way at the top, at a 10.
The experience in playing Deadly Towers is quite unique. The gameplay offers a unique style of play, and the music is somewhat catchy (https://www.youtube.com/watch?v=JYIJzJb_r4w), although grossly repetitive (and it also restarts as you visit each different "room" in the game). The story is somewhat interesting, although there is no interaction with the story throughout the game until defeating the final boss: you play the character of Prince Myer on the eve of his coronation, who is given the ominous warning by a strange shadowy figure that Rubas, "the devil of darkness" is soon coming and plans to use the seven magic bells to summon an army and overtake the kingdom. Hence, Prince Myer must journey to the northern mountains, venture into each tower, collect the bells and burn them to prevent this from happening, and then finally defeat the devil Rubas himself. For all this, I give Deadly Towers an experience placement of an 8.
For mastery, players may have a few good reasons to play Deadly Towers. Speedruns are usually a fun resource to study for this category. While TAS (tool-assisted speedrun) tools may exploit some of the game's bugs, real-time speedruns generally offer a better glimpse of how people master the game. See the link below for a decent speedrun, posted by YouTuber WebNations, of how Jeff Feasel plays the game. I place Deadly Towers at an 8 on mastery because the game is very challenging, not only against enemies but also in finding hard-to-locate items, navigating maze-like dungeons, and finding the best gear in the so-called "Parallel Zones" and "Secret Rooms".
https://www.youtube.com/watch?v=SfSngofeaus
Impact is a reason players may stay in the game because they want to experience the game in different variations. Deadly Towers is essentially a game about journeying to several castles and towers, but fortunately you can choose the order in which you do the towers. You can also choose which gear you want to quest for, and you certainly don't need any - although most probably make the game easier, defensively and/or offensively. For these few points, I place Deadly Towers low on the scale at a 4.
Lastly, the completion aspect covers players who play because they want to obtain everything possible. In Deadly Towers, there are hearts you can find that increase your maximum life. There are also "Parallel Zones" and "Secret Rooms" - hidden points on the map that you find just by wandering into them - which usually house some of the game's best gear. Additionally, there are maze-like dungeons, also hidden, which can be difficult at best to exit. The game's variety of items offers further reason to explore and discover what they all do, as some teleport you to different areas in the game. For all these reasons, I rate Deadly Towers as having a moderate level of completion, at a 6.
The playability aspect of Deadly Towers is the game's low point. Any time you die, you return to the beginning, in a world where journeying to each tower is arduous to begin with. At times there are too many enemies on the screen, which makes it difficult for a normal player who has not spent much time mastering the game to advance. These enemies can knock you back, sometimes into death pits. There are also hidden zones you may haplessly wander into without much hope of exit. The music restarts its loop on every map change, and in the maze-like dungeons you will be restarting the music every few seconds. Items are hard to locate, and the better gear is hidden in "Parallel Zones" and "Secret Rooms." The mechanics of a slow flying sword are questionable when you can only have one sword on the screen at a time (without extra booster items). Some enemies take an unusually large number of hits to kill. The in-game menu can be confusing to use. All of these hurt the game's playability, and even with the strong replayability aspects listed above, a poor level of playability can override all of that. For all these reasons, I rank the game's playability fairly low, at a 2.
For more info on the aspects of replayability, refer to my technical journal paper, located at https://file.scirp.org/pdf/JSEA20120700001_38193851.pdf.
Social: 1
Challenge: 10
Experience: 8
Mastery: 8
Impact: 4
Completion: 6
Playability: 2
Deadly Towers is a notoriously horrific game of the NES era. It gives players many reasons to stop playing, such as starting over from the beginning when you die, and most of these hurt the game's playability and overall enjoyability. Placing the game on the Schemico spectrum actually reveals some positive merits, and with improvements to its playability, its sounds, and its enemy interactions, the game might have been one of the greatest NES hits.