Tuesday, May 13, 2025

AI: Our Organized Companions in a World of Unstructured Thought

If there’s one thing that fascinates me most about artificial intelligence, it’s not the sci-fi hype or the endless “robots taking over” headlines. It’s the idea that AI can be our companion: a partner that helps us make sense of the chaos in our minds and in the world around us.

Let’s be honest: our brains are incredible, but they’re not exactly organized. We store information in a wild, unstructured jumble: memories triggered by smells, half-remembered facts popping up at random, ideas connecting in ways that make sense only to us. Our minds are more like a messy attic than a neatly labeled filing cabinet.

AI, on the other hand, is all about structure. It takes in massive amounts of information, organizes it, and makes it accessible in ways we simply can’t. That’s why, to me, AI isn’t some distant threat or magic bullet. It’s a tool: a powerful, organized companion that helps us bridge the gap between our unstructured thoughts and the structured world we need to operate in.

Here’s the key: AI doesn’t do the work for you. It doesn’t think for you. It’s not a replacement for your creativity, your judgment, or your unique perspective. It’s a tool - a really, really good one - that can help you organize, clarify, and communicate your ideas more effectively.

But with great power comes great responsibility. There’s a real risk in letting technology take over too quickly, which I discussed in my previous blog post, “The story of the Grigori and penicillin.” Just as overusing antibiotics can lead to resistance, over-relying on AI without understanding or boundaries can leave us exposed, uncritical, and even less capable. The trick is to adopt AI thoughtfully, safely, and in a way that enhances - not replaces - our own abilities.

At their core, AI chatbots are built on large language models: giant neural networks trained on everything from classic literature to today’s tweets. They’re designed to spot patterns, understand context, and generate responses that make sense. Think of them as encyclopedic companions who can summarize, clarify, and reformat information at lightning speed.

Why should we care? Because we’re living in an age of information overload. There’s more written word out there than any one person could read in a hundred lifetimes. AI can help us cut through the noise: summarizing long reports, clarifying confusing emails, converting data into readable formats, or just helping us get our thoughts in order.
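To make that concrete, here’s a minimal sketch of the “summarize a long report” case in C#. It assumes the OpenAI chat completions REST API with a key in an environment variable; the model name and prompts are placeholders to swap for whatever service you actually use.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class ReportSummarizer
{
    static async Task Main()
    {
        // Assumes an OpenAI-style chat completions endpoint and an API key
        // stored in the OPENAI_API_KEY environment variable.
        var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", apiKey);

        var longReport = "...the full text of a long report goes here...";

        // The chat format: a system message sets the task, the user
        // message carries the text to be condensed.
        var payload = JsonSerializer.Serialize(new
        {
            model = "gpt-4o-mini", // placeholder; use whatever model you have
            messages = new[]
            {
                new { role = "system",
                      content = "Summarize the user's text in five bullet points." },
                new { role = "user", content = longReport }
            }
        });

        var response = await http.PostAsync(
            "https://api.openai.com/v1/chat/completions",
            new StringContent(payload, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();

        // The generated summary lives at choices[0].message.content.
        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        var summary = doc.RootElement
            .GetProperty("choices")[0]
            .GetProperty("message")
            .GetProperty("content")
            .GetString();

        Console.WriteLine(summary);
    }
}
```

The point isn’t the specific vendor: any chat-style API follows the same shape. You send structured messages, you get structured text back.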

But let’s not get carried away. AI isn’t perfect. It can misunderstand, make mistakes, or reflect biases in its training data. That’s why responsible use is so important. Don’t feed it sensitive information. Don’t blindly trust its output. Always review, question, and use your own judgment.

Ethical use means being aware of privacy, security, and the potential impact on others. It’s about using AI as a tool to amplify your strengths, not as a crutch that dulls your skills or judgment.

How do you use AI effectively? A few ground rules (a concrete example follows the list):

- Be clear and specific: the more detail you give, the better AI can help.
- Give context: tell the AI the audience, tone, or format you want.
- Keep sessions focused: stick to one topic per session for best results.
- Iterate and refine: don’t expect perfection on the first try; tweak your prompts and learn from the output.
- Stay ethical: protect privacy and use AI in ways that are safe and appropriate.
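“Be specific” and “give context” are the rules people skip most often, so here’s what they look like in practice: the same request phrased vaguely, then with audience, tone, and format baked in. (The outage scenario is invented purely for illustration.)

```csharp
using System;

// Vague: leaves audience, tone, and format entirely up to the model.
var vaguePrompt = "Write an email about the outage.";

// Specific: audience, tone, format, and content all stated up front,
// so the model has far less to guess about.
var specificPrompt =
    "Draft an email for non-technical customers explaining yesterday's " +
    "two-hour login outage and how it was fixed. Tone: calm and factual. " +
    "Format: three short paragraphs, no jargon.";

// Either string would be sent as the user message in the earlier sketch.
Console.WriteLine(vaguePrompt);
Console.WriteLine(specificPrompt);
```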

AI is here, and it’s only getting better. The future belongs to those who see AI not as a rival but as a companion: a structured partner to our unstructured minds. Use it wisely, adopt it thoughtfully, and let it help you become more organized, creative, and effective.

Don’t wait for the world to change around you. Start exploring, keep learning, and remember: the best results come when you combine the power of AI with the irreplaceable spark of human insight.

Thursday, May 8, 2025

AI is Here

One story appears again and again in history and fiction: powerful knowledge or technology is given to those who aren’t ready for it, and chaos follows. From ancient myths to real-world breakthroughs, this cautionary tale reminds us that progress without responsibility can be dangerous. Today, as artificial intelligence advances at a rapid pace, the lesson feels more relevant than ever.

One clear example of this cautionary tale shows up in the sci-fi game Star Ocean: The Last Hope. In the game’s story, the Grigori are mysterious beings who accelerate the evolution of less advanced species. The Eldarians, an advanced alien race, once shared their technology with humanity, but the consequences were devastating. The Grigori’s influence led to chaos and destruction, making it clear that handing over powerful knowledge or tools to those unprepared almost always ends badly. Sometimes, the wisest choice is to let societies develop at their own pace, gaining the maturity needed to handle new power responsibly.

This theme isn’t just fiction. History offers its own version of the warning. In 1928, Alexander Fleming accidentally discovered penicillin, a breakthrough that would revolutionize medicine. But the world wasn’t immediately ready for it. Penicillin’s use was initially restricted - not so much to help everyone as to control who held its power. It wasn’t until 1942, when the tragic Cocoanut Grove nightclub fire in Boston created urgent demand, that penicillin was widely used to save lives. This real-world example echoes the Grigori story: even the greatest discoveries can cause unintended consequences if released too soon or without proper understanding. Timing, control, and responsibility matter just as much as innovation.

Fast forward to today, and we find ourselves facing a new chapter in this old story. AI has been developing for decades, and with recent advances in generative models like ChatGPT, it’s suddenly everywhere. Popular culture often paints AI as either a miracle or a menace, sometimes both. Many stories follow a familiar pattern: a breakthrough leads to misuse, conflict, or disaster. But are we really at that breaking point? Not yet. Much of what we call AI today isn’t intelligence in the human sense. It’s a powerful tool that uses vast amounts of data to generate text, images, and more based on patterns. It’s impressive, but it remains just a tool - not a sentient being.

The hype around AI replacing jobs or experts is real, and so is the fear. Yet the real risk isn’t the technology itself but how we choose to use it. Without proper understanding and control, AI can amplify problems like misinformation, scams, and security threats. But used wisely, it can also accelerate learning, creativity, and problem-solving.

So what’s the way forward? The story of the Grigori and penicillin teaches us that powerful tools require careful stewardship. We need to learn how AI works, set clear rules for its use, and remain vigilant about its risks. Rejecting AI outright isn’t the answer. Nor is blindly embracing it without caution. Instead, we must adapt, developing the skills, policies, and ethical frameworks that allow us to harness AI’s benefits while minimizing harm.

This is a new chapter in humanity’s ongoing story with technology. The choices we make now will shape whether AI becomes a force for good or a source of chaos.

The ancient tale of the Grigori, the historical journey of penicillin, and today’s AI revolution all share a common thread: powerful knowledge and technology come with great responsibility. AI is no different. It is here to stay, and it will change our world in profound ways. Our challenge is clear. We must control AI, not let it control us. By approaching it with humility, care, and wisdom, we can write a future where AI empowers humanity rather than endangers it. The story of AI is still being written. Let’s make sure it’s one worth telling.

Wednesday, February 26, 2025

Stuff I do as Principal Software Engineer

It's been a long time since my last post - about five years, really. In those five years, I've matured significantly in my profession. When I last wrote, I was a Senior Software Engineer. Today I write as a Principal Software Engineer, to discuss both what I do in that role and how it differs from my previous position.

New Company

In 2021, I switched companies: from one that built machinery-diagnostics desktop software to one that develops a digital web solution for both borrowers and loan officers in the mortgage fintech space. The core tech stack remained the same - essentially .NET - but the two companies differ widely in how they use it. The prior company used .NET to build a desktop application with WPF and XAML. The newer company uses .NET as the backend to a web application built in Angular. The old one didn't really use APIs in the traditional sense, except to power automated testing. The new company deals primarily in APIs, whether from the Angular frontend to the .NET backend, or from the .NET backend out to third parties, integrating with services such as ordering a credit report. The old company had an international presence and was much larger; the new one, being in the mortgage space, focuses primarily on the US market and has a much smaller team.
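To give a feel for that architecture, here's a hypothetical sketch of the pattern (all names, routes, and shapes invented for illustration): the Angular frontend calls a .NET endpoint, and the backend relays the request to a third-party vendor, such as one that furnishes credit reports.

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Hypothetical endpoint: the Angular frontend POSTs here, and the backend
// relays the request to a third-party credit vendor's API.
[ApiController]
[Route("api/credit-reports")]
public class CreditReportController : ControllerBase
{
    private readonly HttpClient _vendorClient;

    // An HttpClient configured elsewhere (e.g., via IHttpClientFactory)
    // with the vendor's base address and credentials.
    public CreditReportController(HttpClient vendorClient)
        => _vendorClient = vendorClient;

    [HttpPost]
    public async Task<IActionResult> Order([FromBody] CreditReportRequest request)
    {
        // Forward the borrower's details to the vendor (route is invented).
        var vendorResponse =
            await _vendorClient.PostAsJsonAsync("/v1/credit-reports", request);

        if (!vendorResponse.IsSuccessStatusCode)
            return StatusCode((int)vendorResponse.StatusCode);

        // Hand the vendor's report back to the Angular frontend as-is.
        var report = await vendorResponse.Content.ReadFromJsonAsync<object>();
        return Ok(report);
    }
}

// Minimal request shape; a real integration carries far more borrower data.
public record CreditReportRequest(string BorrowerName, string Address);
```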

When I joined the new company, I signed on as a Senior Software Engineer. My duties overall remained unchanged from those at the prior company: join the team, take on functional requirements, and implement them. I started with a few simpler changes, then quickly worked my way up to more complex ones, including a complete integration with a pricing and product engine - the kind of integration that was essential to the POS (Point of Sale) web platform we were building. Developing this integration let me show the first real signs of my abilities, as I had to coordinate with the third-party vendor, asking questions and clarifying the process of connecting our POS to their service. Our POS platform, mind you, was not modern - not atypical in this industry. The project was riddled with technical debt across many convoluted layers of spaghetti code. Understanding how all those layers worked together was no small feat, but taking on the task of writing the documentation no one had previously had time to create - charts, flow diagrams, database schematics - helped make sense of it all. As far as integrations go, pricing is one of the largest, right behind disclosure generation and handing all of the data over to an LOS (Loan Origination System).

With a major pricing integration delivered, it was becoming clear I could carry significant weight on the team. Then something interesting happened: in our small team, both the lead engineer and the engineering director resigned. There was no clear reassignment of their duties, but there was an obvious vacuum of responsibility - one it seemed only I was positioned to fill. For me, responsibility and duty don't carry the heaviness those words imply; I'm eager to do what needs to be done, and good at figuring out what that is. The team had no lead engineer, but as a senior I was already acting as a mentor across the team. As projects continued, more and more opportunities arose, and each time I rose to the challenge. First it was a migration from TFS to Git. Then it was the migration from SDK usage to API usage within the LOS integration - our platform's single largest integration by far.

"SDK to API" as it was called was a project spanning about 12 months. I led both an onshore and offshore team of engineers as we tackled this project, starting with making key decisions about whether to simply upgrade the old SDK solution in place, or scrap it and build something brand new, using the latest version of .NET. The benefits of building something brand new were obvious, but it would mean also making changes to how the POS platform would call the new API solution. It was during this project that I petitioned for a promotion, given all that I had been doing - I was essentially both a lead engineer and a director for at least six months. I was easily given the promotion - to Principal Software Engineer - the role I argued I was performing (and wanted). Directorial duties involved the less-than-engineering aspects of management - aspects of the industry I didn't care for as much as technical leadership. Management itself is a needed function within the business - but I was happy to remain within the pure technical track as we hired a new director for the company.

Promotion

As a Principal Software Engineer, I continued everything I had been doing thus far: new development, mentoring new hires, conducting interviews as the company expanded, and refining our engineering policies - tightening the ones that were working and modifying the ones that weren't. I also worked with the new director, not only on onboarding but on building proposals for addressing our legacy platform's massive tech debt. Much as with the SDK to API project, we decided to rebuild the POS platform from scratch, in a new effort to modernize our solutions. That would mean hiring more engineers, including lead engineers to run the new teams needed to build out the modernization solution.

As the resident expert in the legacy platform - which still has value today, and will until the modernization platform is fully fleshed out and caught up to the legacy offerings - I still had a team to lead, vendor integrations to keep updated (occasional vendor upgrades are needed as the various integrations age), and new features to develop to keep the business running while the wheels of modernization spin. Today, I act in the capacity of a Principal Software Engineer across both the legacy and modernization teams. For the legacy team, I am the lead engineer. It has historically been a waterfall team, but in a push toward formal agile development, I'm leading the transformation through the various agile ceremonies. In recent sprints, the team has consistently delivered some of its first on-time, full sprint completions - early signs of true agile adoption and of an ability to plan out and meet deadlines.

In summary, being a Principal Software Engineer doesn't feel like a huge step up in responsibility - it's become part of who I am as I've matured within the role. I often ask a question in interviews: what do you think the difference is between a senior and a lead engineer? I've heard lots of answers, and I can fold those thoughts into my own. Of all the aspects of leadership, the one that stands out most to me is being not just a technical leader but a motivational one. There's a somewhat unmeasurable quality in a leader's ability to build rapport with team members, to catalyze growth at both the individual and collective levels, and to be a sort of mental beacon of support within and outside the team. While a senior engineer might show some of these qualities within the development group - mentoring other developers and making key technical decisions - the lead engineer transcends the technical boundaries and begins to mentor everyone on the team.

Anyway, that's it for now. I'll try to post more frequently than once every five years. Perhaps more details on the API layers I've built are in order - after all, I really do enjoy API development in general.