Sorry for Hacking

It is my job, though


So I'm getting rid of the video weeks. Stats don't lie, and stats say newsletter people weren't into them, so onward and upward. Instead of videos in the off-weeks, I'm going to be talking about stuff related to my day job of penetration testing (it's not lewd, I promise).

I work in offensive security, which means companies pay to get a report from me detailing how I was (or wasn't... but usually was =P) able to hack into something of theirs. Engagements vary from a single web or mobile application all the way to "emulate a bad actor and do your worst to us" - which is always a blast, working from the outside and trying to get as far into the company as possible.

Being in that space, I've got to stay on top of a lot of security and tech news (AI is everywhere) because things change so fast, and those changes can have huge impacts overnight.

For example, I was on an assessment where a working exploit for a piece of tech was dropped on Twitter (the artist formerly known as?), and we were able to use parts of it the next morning to get initial access. From hardened external presence to completely compromised overnight... because of Twitter. It's awesome! Anyway, this series will focus primarily on those sorts of things, and this is the first entry. Thanks for reading!




(Hack History) Fishtank Casino Hack

This gets technical; the bolded words/phrases have definitions below.

The fish tank hack represents a textbook (is there a textbook of hacks yet?) case of attack surface expansion through IoT integration. Threat actors identified and exploited a fundamental architectural weakness: a smart thermometer in a friggin' casino's aquarium with external connectivity but minimal security controls. This device - integrated into the broader network without proper segmentation - provided lateral movement capabilities that bypassed traditional perimeter defenses. The attackers leveraged this position to establish persistence, elevate privileges, and ultimately exfiltrate approximately 10GB of sensitive data to command and control infrastructure in Finland. The breach perfectly illustrates how operational technology additions create asymmetric risk when implemented without security-by-design principles.

What makes this incident particularly significant is its demonstration of the practical limitations of defense-in-depth strategies that fail to account for nontraditional attack vectors. The casino had undoubtedly invested heavily in conventional security controls - firewalls, IDS/IPS systems, access management, and physical security - yet was compromised through what security teams likely considered a negligible risk component (or weren't informed at all). This case study has since become emblematic of why security architecture must extend beyond traditional IT infrastructure to encompass all connected systems, regardless of their apparent triviality. Your network is only as secure as its weakest connected device, even if that device is just there to keep the fish comfortable.

Non-techie definitions

  • Attack Surface Expansion: Once you get the initial break in the defenses of a target, one of the first things you want to do is find as many other avenues as possible that could also be used to get in. Many times, once you get a peek behind the curtain with that initial compromise, you're able to identify other ways in that weren't visible from the outside. It's also rather important in case your initial point of entry gets burned.

  • IoT: “Internet of things”, basically all the “smart” stuff we’ve all got in our homes like fridges, speakers, door locks, etc. that are connected to the internet for… reasons.

  • Proper segmentation: A good practice companies will have in their networks is to isolate certain sensitive areas from other, less sensitive areas. In this case it would seem the victim network was rather "flat", so a compromise anywhere in the network could be used to access any other part of it (guessing).

  • Lateral movement: Essentially moving within the network to different areas. An example would be compromising a user then finding credentials on their machine to access some other application/server/whatever.

  • Security-by-design principles: Not a formal definition, but IoT devices in general often ship insecure by default rather than secure by default. They're getting better, but I wouldn't say it's good yet.
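To make "proper segmentation" concrete, here's a minimal sketch in plain Python. The zone names and policies are hypothetical (real segmentation lives in firewall and VLAN rules, not a dictionary), but it shows the difference between the casino's flat network and a segmented one: in the flat policy, the compromised thermometer's zone can reach everything.

```python
# Toy model of network segmentation: which zones may open connections to which.
# Zone names and policies are hypothetical; real enforcement is done by
# firewalls/VLAN ACLs, not application code.

FLAT_NETWORK = {
    # Every zone can reach every other zone -- the casino scenario.
    "iot": {"iot", "corp", "finance"},
    "corp": {"iot", "corp", "finance"},
    "finance": {"iot", "corp", "finance"},
}

SEGMENTED_NETWORK = {
    # IoT devices are isolated: they can only talk among themselves.
    "iot": {"iot"},
    "corp": {"corp", "finance"},
    "finance": {"finance"},
}

def can_reach(policy, src_zone, dst_zone):
    """Return True if a host in src_zone may open a connection into dst_zone."""
    return dst_zone in policy.get(src_zone, set())

# The compromised fish-tank thermometer sits in the "iot" zone.
print(can_reach(FLAT_NETWORK, "iot", "finance"))       # thermometer reaches the sensitive data
print(can_reach(SEGMENTED_NETWORK, "iot", "finance"))  # blocked after segmentation
```

With the segmented policy, popping the thermometer gets an attacker exactly nothing beyond other fish-adjacent gadgets, which is the whole point.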

Links:

(AI) AI Voice-Cloning Scams

The AI voice-cloning market has become a veritable candy store for scammers, with Consumer Reports confirming what security professionals have been shouting into the void for years: most products couldn't care less about preventing fraud. These technologies - capable of photocopying your vocal identity with just seconds of audio - offer practically zero technical safeguards against misuse. Four of six tested services allow users to create voice clones from any public audio, without consent verification mechanisms, and often for free. I tested this myself with a local installation I pulled off Hugging Face and had David Attenborough saying mean things to some buddies on Discord. It took me about twenty minutes, most of which was me trying to figure out what to say. With 845,000 imposter scams reported in 2024 alone, we're witnessing the predictable consequence of deploying powerful technology with the security equivalent of "please don't be evil" sticky notes.

The tech industry's collective shrug toward implementing meaningful guardrails exposes a fundamental disconnect between innovation speed and ethical responsibility. While AI developers race to market with increasingly convincing voice replicas - now past the "uncanny valley" where human ears can detect the difference - they've conveniently outsourced the ethical considerations to voluntary guidelines that carry all the enforcement power of a suggestion. The technology's absolutely rad as hell, easily accessible, and I absolutely don't have an answer to stop the abuse of it because... it's easily accessible and rad as hell. I think that likely this is going to be one of those things we have to learn to work around and acknowledge as the new normal and we'll have to wait and see what sort of long term societal changes it'll bring.

Links:

(Wild) The Loneliness Epidemic

The modern loneliness epidemic has created ideal conditions for romance scammers, who have extracted nearly $4.5 billion from victims over the past decade. These criminals aren't just running technical exploits - they're executing precisely crafted psychological manipulations that bypass rational thinking. Their methodology follows a consistent pattern: establish contact, rapidly build artificial intimacy through "love bombing," introduce fictional financial problems, and then manipulate victims into volunteering financial assistance. What makes these scams particularly effective is how they create the illusion of choice - victims believe they're making autonomous decisions to help someone they care about, not being manipulated. While romance scammers traditionally faced limitations in how many victims they could simultaneously engage, AI tools now enable them to manage hundreds of conversations across multiple languages, dramatically increasing their operational capacity and potential profits.

Social isolation creates a dangerous vulnerability cycle that's increasingly difficult to break. Once victimized, people often become more isolated through shame and financial loss, making them prime targets for re-victimization - as evidenced by the case of a victim who returned to their scammer "just to see his photos" despite knowing they were being defrauded. While AI has given scammers powerful new tools, it could potentially serve defensive purposes as well - dating platforms could implement systems that detect linguistic patterns associated with manipulation tactics. Until such protections become widespread, our most effective defense remains genuine human connection and community support - resources that, unfortunately, seem increasingly scarce in modern society.
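As a sketch of what that kind of linguistic-pattern detection could look like, here's a toy red-flag scorer. The phrase lists, weights, and threshold are all made up for illustration; a real platform would use trained classifiers over far richer signals, not a handful of substring checks.

```python
# Toy red-flag scorer for romance-scam chat messages.
# Phrase lists and weights are illustrative only -- a real system would
# use trained models, conversation history, and account metadata.

RED_FLAGS = {
    "love bombing": (["my love", "soulmate", "destiny", "never felt this way"], 1),
    "money request": (["wire", "gift card", "western union", "send me"], 3),
    "manufactured crisis": (["hospital", "customs", "stuck overseas", "emergency"], 2),
    "isolation": (["don't tell", "our secret", "they wouldn't understand"], 2),
}

def score_message(text):
    """Return (total_score, triggered flag names) for one chat message."""
    lowered = text.lower()
    hits = []
    total = 0
    for flag, (phrases, weight) in RED_FLAGS.items():
        if any(phrase in lowered for phrase in phrases):
            hits.append(flag)
            total += weight
    return total, hits

msg = ("My love, I'm stuck overseas and customs froze my account. "
       "Send me a gift card, but don't tell your family.")
total, hits = score_message(msg)
print(total, hits)  # every category trips on this classic script
```

Even something this crude would flag the textbook script above, which is exactly the point: the scam playbook is depressingly consistent.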

A family member of mine has a good friend currently going through this. They have spent somewhere in the realm of $50,000 on this scam without ever actually meeting their "love interest" in person. The victim has family and friends telling them it's a scam; some even took the victim to the bank to have staff explain exactly how the scam works. Nope, doesn't care, and everyone else is stupid, not them. It's an incredibly cruel way to scam people, especially because the victims are often in retirement, i.e. technological toddlers with no idea how or why anyone would scam them in such a way. If you've got a relative in that demographic who could be lookin' for love, it wouldn't be a bad idea to let them know this sort of thing exists and what to look out for.

Links:

Thanks for reading!

Jake
