Episodes
Sunday Feb 26, 2023
Government threats to end-to-end encryption—the technology that secures your messages and shared photos and videos—have been around for decades, but the most recent threats to this technology are unique in how they intersect with a broader, sometimes-global effort to control information on the Internet.
Take two efforts in the European Union and the United Kingdom. New proposals there would require companies to scan any content that their users share with one another for Child Sexual Abuse Material, or CSAM. If a company offers end-to-end encryption to its users, effectively locking itself out of the content that its users share, then it's tough luck: that company will still be required to do the essentially impossible—build a system that keeps everyone else out while letting itself and the government in.
While these government proposals may sound similar to earlier global efforts to weaken end-to-end encryption, like the United States' prolonged attempt to tarnish the technology by linking it to terrorist plots, they differ in how easily they could become tools for censorship.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Mallory Knodel, chief technology officer for the Center for Democracy and Technology, about new threats to encryption, old and bad proposals that keep resurfacing, who encryption benefits (everyone), and how building a tool to detect one legitimate harm could, in turn, create a tool to detect all sorts of legal content that other governments simply do not like.
"In many places of the world where there's not such a strong feeling about individual and personal privacy, sometimes that is replaced by an inability to access mainstream media, news, accurate information, and so on, because there's a heavy censorship regime in place," Knodel said. "And I think that drawing that line between 'You're going to censor child sexual abuse material, which is illegal and disgusting and we want it to go away,' but it's so very easy to slide that knob over into 'Now you're also gonna block disinformation,' and you might at some point, take it a step further and block other kinds of content, too, and you just continue down that path."
Knodel continued:
"Then you do have a pretty easy way of mass-censoring certain kinds of content from the Internet that probably shouldn't be censored."
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Sunday Feb 12, 2023
What is AI “good” at (and what the heck is it, actually), with Josh Saxe
In November of last year, the AI research and development lab OpenAI revealed its latest, most advanced language project: A tool called ChatGPT.
ChatGPT is so much more than "just" a chatbot. As users have shown with repeated testing and prodding, ChatGPT seems to "understand" things. It can give you recipes that account for whatever dietary restrictions you have. It can deliver basic essays about moments in history. It can be—and has been—used by university students to cheat, giving a new meaning to plagiarism by passing off work that is not theirs. It can write song lyrics about X topic as though composed by Y artist. It can even have fun with language.
For example, when ChatGPT was asked to “Write a Biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR,” ChatGPT responded in part:
“And it came to pass that a man was troubled by a peanut butter sandwich, for it had been placed within his VCR, and he knew not how to remove it. And he cried out to the Lord, saying ‘Oh Lord, how can I remove this sandwich from my VCR, for it is stuck fast and will not budge.’”
Is this fun? Yes. Is it interesting? Absolutely. But what we're primarily interested in on today's episode of Lock and Code, with host David Ruiz, is where artificial intelligence and machine learning—ChatGPT included—can be applied to cybersecurity, because as some users have already discovered, ChatGPT can be used with some success to analyze lines of code for flaws.
It is a capability that has likely further energized the multibillion-dollar endeavor to apply AI to cybersecurity.
Today, on Lock and Code, we speak to Joshua Saxe about what machine learning is "good" at, what problems it can make worse, whether we have defenses to those problems, and what place machine learning and artificial intelligence have in the future of cybersecurity. According to Saxe, there are some areas where, under certain conditions, machine learning will never be able to compete.
"If you're, say, gonna deploy a set of security products on a new computer network that's never used your security products before, and you want to detect, for example, insider threats—like insiders moving files around in ways that look suspicious—if you don't have any known examples of people at the company doing that, and also examples of people not doing that, and if you don't have thousands of known examples of people at the company doing that, that are current and likely to reoccur in the future, machine learning is just never going to compete with just manually writing down some heuristics around what we think bad looks like."
Saxe continued:
"Because basically in this case, the machine learning is competing with the common sense model of the world and expert knowledge of a security analyst, and there's no way machine learning is gonna compete with the human brain in this context."
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Sunday Jan 29, 2023
In 2020, a photo of a woman sitting on a toilet—her shorts pulled half-way down her thighs—was shared on Facebook, and it was shared by someone whose job it was to look at that photo and, by labeling the objects in it, help train an artificial intelligence system for a vacuum.
Bizarre? Yes. Unique? No.
In December, MIT Technology Review investigated the data collection and sharing practices of iRobot, the developer of the popular Roomba robot vacuums. In its reporting, MIT Technology Review discovered a series of 15 images that were all captured by development versions of Roomba vacuums. Those images were eventually shared with third-party contractors in Venezuela who were tasked with "annotation"—the act of labeling photos with identifying information. This work of, say, tagging a cabinet as a cabinet, or a TV as a TV, or a shelf as a shelf, would help the robot vacuums "learn" about their surroundings when inside people's homes.
In response to MIT Technology Review's reporting, iRobot stressed that none of the images found by the outlet came from customers. Instead, the images were "from iRobot development robots used by paid data collectors and employees in 2020." That meant that the images were from people who agreed to be part of a testing or "beta" program for non-public versions of the Roomba vacuums, and that everyone who participated had signed an agreement as to how iRobot would use their data.
According to the company's CEO in a post on LinkedIn: "Participants are informed and acknowledge how the data will be collected."
But after MIT Technology Review published its investigation, people who'd previously participated in iRobot's testing environments reached out. According to several of them, they felt misled.
Today, on the Lock and Code podcast with host David Ruiz, we speak with the investigative reporter of the piece, Eileen Guo, about how all of this happened, and about how, she said, this story illuminates a broader problem in data privacy today.
"What this story is ultimately about is that conversations about privacy, protection, and what that actually means, are so lopsided because we just don't know what it is that we're consenting to."
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Sunday Jan 15, 2023
Fighting tech’s gender gap with TracketPacer
Last month, the TikTok user TracketPacer posted a video online called “Network Engineering Facts to Impress No One at Zero Parties.” TracketPacer regularly posts fun, educational content about how the Internet operates. The account is run by a network engineer named Lexie Cooper, who has worked in a network operations center, or NOC, and who’s earned her Cisco Certified Network Associate certificate, or CCNA.
In the video, Cooper told listeners about the first spam email being sent over Arpanet, about how an IP address doesn't reveal that much about you, and about how Ethernet isn't really a cable—it's a protocol. But amidst Cooper's bite-sized factoids, a pair of comments she made about something else—the gender gap in the technology industry—set off a torrent of anger.
As Cooper said in her video:
“There are very few women in tech because there’s a pervasive cultural idea that men are more logical than women and therefore better at technical, 'computery' things.”
This, the Internet decided, would not stand.
The IT industry is “not dominated by men, well actually, the women it self just few of them WANT to be engineer. So it’s not man fault," said one commenter.
“No one thinks it’s because women can’t be logical. They’re finally figuring out those liberal arts degrees are worthless," said another.
“The women not in computers fact is BS cuz the field was considered nerdy and uncool until shows like Big Bang Theory made it cool!” said yet another.
The unfortunate reality facing many women in tech today is that, when they publicly address the gender gap in their field, they receive dozens of comments online that not only deny the reasons for the gender gap, but also, together, likely contribute to the gender gap. Nobody wants to work in a field where they aren't taken seriously, but that's what is happening.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Cooper about the gender gap in technology, what she did with the negative comments she received, and what, if anything, could help make technology a more welcoming space for women. One easy lesson, she said:
"Guys... just don't hit on people at work. Just don't."
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Saturday Dec 31, 2022
Why does technology no longer excite?
When did technology last excite you?
If Douglas Adams, author of The Hitchhiker's Guide to the Galaxy, is to be believed, your own excitement ended, simply had to end, after you turned 35 years old. Decades ago, in private writings that were published only after his death, Adams came up with "a set of rules that describe our reactions to technologies." They were simple and short:
- Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
- Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
- Anything invented after you're thirty-five is against the natural order of things.
Today, on the Lock and Code podcast with host David Ruiz, we explore why technology seemingly no longer excites us. It could be because every annual product release is now just an iterative improvement on the exact same product released the year prior. It could be because just a handful of companies now control innovation. It could even be because technology is now fatally entangled with the business of money-making, and so, with every money-making idea, dozens of other companies flock to the same idea, giving us the same product with a different veneer—Snapchat recreated endlessly across the social media landscape, cable television subscriptions "disrupted" by so many streaming services that we recreate the same problem we had before.
Or, it could be because, as Shannon Vallor, director of the Centre for Technomoral Futures in the Edinburgh Futures Institute, first suggested, the promise of technology is not what it once was, or at least, not what we once thought it was. As Vallor wrote on Twitter in August of this year:
"There’s no longer anything being promised to us by tech companies that we actually need or asked for. Just more monitoring, more nudging, more draining of our data, our time, our joy."
For our first episode of Lock and Code in 2023—and our first episode of our fourth season (how time flies)—we bring back Malwarebytes Labs editor-in-chief Anna Brading and Malwarebytes Labs writer Mark Stockley to ask: Why does technology no longer excite them?
Tune in today.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Sunday Dec 18, 2022
Chasing cryptocurrency through cyberspace, with Brian Carter
On June 7, 2021, the US Department of Justice announced a breakthrough: Less than one month after the oil and gas pipeline company Colonial Pipeline had paid its ransomware attackers roughly $4.4 million in bitcoin in exchange for a decryption key that would help the company get its systems back up and running, the government had in turn found where many of those bitcoins had gone, clawing back a remarkable $2.3 million from the cybercriminals.
In cybercrime, this isn't supposed to happen—or at least it wasn't, until recently.
Cryptocurrency is vital to modern cybercrime. Every recent story you hear about a major ransomware attack involves the implicit demand from attackers to their victims for a payment made in cryptocurrency—and, almost always, the preferred cryptocurrency is bitcoin. In 2019, the ransomware negotiation and recovery company Coveware revealed that a full 98 percent of ransomware payments were made using bitcoin.
Why is that? Well, partly because, for years, bitcoin carried an inflated reputation for being truly "anonymous," as payments to specific "bitcoin addresses" could not, seemingly, be attached to the specific persons behind those addresses. But cryptocurrency has matured. Major cryptocurrency exchanges do not want their platforms to be used to exchange stolen funds into local currencies for criminals, so they work with law enforcement agencies that have, independently, gained a great deal of experience in understanding cybercrime. Investigations have also improved in rate and quality thanks to advances in the technology that actually tracks cryptocurrency payments online.
All of these developments don't necessarily mean that cybercriminals' identities can be easily revealed. But as Brian Carter, senior cybercrimes specialist for Chainalysis, explains on today's episode, it has become easier for investigators to know who is receiving payments, where they're moving funds to, and even how their criminal organizations are set up.
"We will plot a graph, like a link graph, that shows [a victim's] payment to the address provided by ransomware criminals, and then that payment will split among the members of the crew, and then those payments will end up going eventually to a place where it'll be cashed out for something that they can use on their local economy."
Tune in to today's Lock and Code podcast, with host David Ruiz, to learn about the world of cryptocurrency forensics, what investigators are looking for in reams of data, how they find it, and why it’s so hard.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Sunday Dec 04, 2022
Security advisories are falling short. Here’s why, with Dustin Childs
Decades ago, patching was, to lean into a corny joke, a bit patchy.
In the late 90s, the Microsoft operating system (OS) Windows 98 had a companion piece of software that would find security patches for the OS so that users could then download those patches and deploy them to their computers. That software was simply called Windows Update.
But Windows Update had two big problems. One, it had to be installed by a user—if a user was unaware of Windows Update, then they were also likely unaware of the patches that should be deployed to Windows. Two, Windows Update did not scale well: corporations running hundreds of instances of Windows had to install every update, and they had to uninstall any patches issued by Microsoft that broke existing functionality.
That time-sink proved to be a real obstacle for systems administrators because, back in the late 90s, patches weren't scheduled. They came when they were needed, and that could be whenever Microsoft learned about a vulnerability that needed to be addressed. Without a schedule, companies were left to react to patches, rather than plan for them.
So, from the late 90s to the early 2000s, Microsoft standardized its patching process. Patches would be released on the second Tuesday of each month. In 2003, Microsoft formalized this process with Patch Tuesday.
Around the same time, the United States National Infrastructure Advisory Council began researching a way to communicate the severity of discovered software vulnerabilities. What they came up with in 2005 was the Common Vulnerability Scoring System, or CVSS. CVSS, which is still used today, is a formula that people rely on to assign a vulnerability a severity score from 0 to 10, with 10 being the most severe.
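For a sense of the arithmetic involved, here is a condensed sketch of the base-score calculation from CVSS v3.1, the revision in wide use today. It covers only the Scope: Unchanged case; the constants come from the public specification, and the example metric values describe a hypothetical network-exploitable, low-complexity, high-impact vulnerability.

```python
import math

def roundup(x: float) -> float:
    """CVSS rounding: round up to one decimal place."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score, condensed to the Scope: Unchanged case."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)       # impact sub-score
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Network vector (0.85), low complexity (0.77), no privileges (0.85),
# no user interaction (0.85), high C/I/A impact (0.56 each):
print(base_score(0.85, 0.77, 0.85, 0.85, 0.56, 0.56, 0.56))  # 9.8
```

That 9.8 is the familiar "critical" score attached to many headline vulnerabilities.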
Patch Tuesday and CVSS are good examples of what happens when people come together to fix a problem with patching.
But as we discuss in today's episode of the Lock and Code podcast with host David Ruiz, patches are backsliding, both in their effectiveness and in how well they are explained. Companies are becoming more tight-lipped about what their patches do, leaving businesses in the dark about what a patch addresses and whether it is actually critical to their own systems.
Our guest Dustin Childs, head of threat awareness for Trend Micro Zero Day Initiative (ZDI), explains the consequences of such an ecosystem.
"If you're not getting the right information about a vulnerability or a group of vulnerabilities, you might spend your resources elsewhere and that vulnerability that you didn't think was important becomes very important to you, or you're spending all of your time and, and energy on."
Tune in today.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Sunday Nov 20, 2022
Threat hunting: How MDR secures your business
A cyberattack is not the same thing as malware—in fact, malware itself is typically the last stage of an attack, the punctuation mark that closes out months of work from cybercriminals who have infiltrated a company, learned about its systems and controls, and slowly spread across its network through various tools, some of which are installed on a device entirely by default.
The goal of cybersecurity, though, isn't to recover after an attack, it's to stop an attack before it happens.
On today's episode of the Lock and Code with host David Ruiz, we speak to two experts at Malwarebytes about how they've personally discovered and stopped attacks in the past and why many small- and medium-sized businesses should rely on a newer service called Managed Detection and Response for protecting their own systems.
Many organizations today will already be familiar with Endpoint Detection and Response (EDR), the de facto cybersecurity tool, made by nearly every vendor, that lets security teams watch over their many endpoints and respond if the software detects a problem. But the mass availability of EDR does not mean that cybersecurity itself is always within arm's reach. Countless organizations today are so overwhelmed with day-to-day IT issues that monitoring cybersecurity can be difficult. The expertise can be lacking at a small company. The knowledge of how to configure an EDR tool to flag the right types of warning signs can be missing. And the time to adequately monitor an EDR tool can be in short supply.
This is where Managed Detection and Response—MDR—comes in. More a service than a specific tool, MDR is a way for companies to rely on a team of experienced analysts to find and protect against cyberattacks before they happen. The power behind MDR services lies in their threat hunters: people who have prevented ransomware from being triggered, who have investigated attackers’ moves across a network, who have pulled the brakes on a botnet infection.
These threat hunters can pore over log files and uncover, for instance, a brute force attack against a remote desktop protocol port, or they can recognize a pattern of unfamiliar activity coming from a single account that has perhaps been compromised, or they can spot a ransomware attack in real time, before it has launched, even creating a new rule to block an entirely new ransomware variant before it has been spotted in the wild. Most importantly, these threat hunters can do what software cannot, explained Matt Sherman, senior manager of MDR delivery services. They can stop the people behind an attack, not just the malware those people are deploying.
"Software stops software, people stop people."
Today, we speak with Sherman and MDR lead analyst AnnMarie Nayiga about how they find attacks, what attacks they've stopped in the past, why MDR offers so many benefits to SMBs, and what makes for a good threat hunter.
You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever podcast platform you prefer.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Sunday Nov 06, 2022
How student surveillance fails everyone
Last month, when Malwarebytes published joint research with 1Password about the online habits of parents and teenagers today, we spoke with a Bay Area high school graduate on the Lock and Code podcast about how she spends her days online and what she thinks are the hardest parts about growing up with the Internet. And while we learned a lot in that episode—about time management, about comparing one's self to others, and about what gets lost when kids swap in-person time with online time—we didn't touch on an increasingly concerning issue affecting millions of children and teenagers today: Student surveillance.
Nailing down the numbers on the use of surveillance technologies in schools today is nearly impossible, as the types and the capabilities of student surveillance software are many.
There’s the surveillance of students’ messages to one another in things like emails or chats. There’s the surveillance of their public posts, on platforms like Twitter or Instagram. There are even tools that claim they can integrate directly with Google products, like Google Docs, to try to scan for worrying language about self-harm, or harm towards others, or drug use. There's also surveillance that requires hardware. Facial recognition technology, paired with high-resolution cameras, is often sold with the promise that it can screen school staff and visitors when they approach a building. Some products even claim to detect emotion in a person’s face. Other software, when paired with microphones that are placed within classrooms, claims to detect “aggression.” A shout or a yelp or a belting of anger would, in theory, trigger a warning from these types of monitoring applications, maybe alerting a school administrator to a problem as it is happening.
All of these tools count when we talk about student surveillance, and, at least from what has been publicly reported, the use of many of them is growing.
In 2021, the Center for Democracy and Technology surveyed teachers in K through 12 schools and simply asked if their schools used monitoring software: 81 percent said yes.
With numbers like that, it'd be normal to assume that these tools also work. But a wealth of investigative reporting—upon which today's episode is based—reveals that these tools often vastly over-promise their results. If those promises only concerned, say, drug use, or bullying, or students ditching classes, these failures would already cause concern. But as we explore in today’s episode, too many schools buy and use this software because they think it will help solve a uniquely American problem: school shootings.
Today’s episode does not contain any graphic depictions of school shootings, but it does discuss details and the topic itself.
Sources:
School Surveillance Zone, The Brennan Center for Justice at NYU
Student Activity Monitoring Software Research Insights and Recommendations, Center for Democracy and Technology
With Safety in Mind, Schools Turn to Facial Recognition Technology. But at What Cost?, EdSurge
RealNetworks Provides SAFR Facial Recognition Solution for Free to Every K-12 School in the U.S. and Canada, RealNetworks
Under digital surveillance: how American schools spy on millions of kids, The Guardian
Facial recognition in schools: Even supporters say it won't stop shootings, CNET
Aggression Detectors: The Unproven, Invasive Surveillance Technology Schools Are Using to Monitor Students, ProPublica
Why Expensive Social Media Monitoring Has Failed to Protect Schools, Slate
Tracked: How colleges use AI to monitor student protests, The Dallas Morning News
Demonstrations and Protests: Using Social Media to Gather Intelligence and Respond to Campus Crowds, Social Sentinel
New N.C. A&T committee will address sexual assault, Winston-Salem Journal
BYU students hold ‘I Can’t Breathe’ protest on campus, Daily Herald
Thrown bagels during MSU celebration lead to arrests, Detroit Free Press
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Sunday Oct 23, 2022
A gym heist in London goes cyber
A thief has been stalking London.
This past summer, multiple women reported similar crimes to the police: While they worked out at their local gyms, someone snuck into the locker rooms, busted open their locks, stole their rucksacks and gym bags, and then, within hours, purchased thousands of pounds' worth of goods. Apple, Selfridges, Balenciaga, Harrods—the thief has expensive taste.
At first blush, the crimes sound easy to explain: A thief stole credit cards and used them in person at various stores before they could be caught.
But for at least one victim, the story is more complex.
In August, Charlotte Morgan had her bag stolen during an evening workout at her local gym in Chiswick. The same pattern of high-price spending followed—the thief spent nearly £3,000 at an Apple store in West London, another £1,000 at a separate Apple store, and then almost £700 at Selfridges. But upon learning just how much the thief had spent, Morgan realized something was wrong: She didn't have that much money in her primary account. To access all of her funds, the thief would have needed to make a transfer out of her savings account, which would have required the use of her PIN.
"[My PIN is] not something they could guess... So I thought 'That's impossible,'" Morgan told the Lock and Code podcast. But, after several calls with her bank and in discussions with some cybersecurity experts, she realized there could be a serious flaw with her online banking app. "But the bank... what they failed to mention is that every customer's PIN can actually be viewed on the banking app once you logged in."
Today on the Lock and Code podcast with host David Ruiz, we speak with Charlotte Morgan about what happened this past summer in London, what she did as she learned about the increasing theft of her funds, and how one person could so easily abuse her information.
Tune in today to also learn about what you can do to help protect yourself from this type of crime.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)