First, let's pull up a definition of sarcasm.
It's understandable why the NSA would want to avoid lawsuits by filtering out everyone who didn't actually MEAN their threats. But the difficulty with machines recognizing sarcasm comes in two parts:
One, sarcasm relies on the context in which it is made.
This is an easy one. No one wants to have to wait for their food, right? We construe that as a negative thing. So, someone acting excited or happy about their food being late is being sarcastic.
However, sometimes it can be more difficult. If I say "Wow, great new plan by the NSA," if you don't know specifically which plan I'm referring to, my opinion on or views previously expressed about the issue, or even what community or scenario I'm speaking in, you might have a very hard time telling whether or not I'm being sarcastic. Then tie in allusions I make to other situations or pieces of media that need the same amount of context to be understood. It's not looking GREAT for the NSA right now.
Two, many people like their sarcasm as subtle as possible.
Some trolls will run a satirical account where they try to make as many people angry as possible through their content to get views. Meticulous troll sarcasm can be difficult even for human users to spot, but for machines, it can be downright impossible, since these trolls won't use #hashtags, italics, WAAAAYYY OVEREMPHASIZING, or the other clues more sincere sarcasm might present. Imagine Jonathan Swift's "A Modest Proposal." An important piece of satire, but how many HUMANS thought it was real when it came out?
This isn't to say that a computer CAN'T find sarcasm on the internet, but it's harder than it might look at first glance.
A couple of projects, like What Does The Internet Think and SyFy's Twitter Popularity Index, have already attempted "mining the sentiments" of social media. As this awesome article states, it's complicated, and while SyFy doesn't have the authority to arrest people if its program goes wrong, that's not the case for high-stakes web-terrorist stakeouts. As seen above, going by sentiment mining alone, half of internet users are neo-Nazis. (Note: This demographic is only accurate for YouTube commenters so far as I can tell. Sigh.)
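To see why sentiment mining alone falls flat, here's a toy sketch of the lexicon-based scoring these projects roughly rely on. The word lists and the function are entirely hypothetical, not anyone's actual system — the point is just that a literal scorer happily labels a sarcastic jab as positive:

```python
# Toy lexicon-based sentiment scorer (hypothetical word lists, not any
# real NSA or SyFy system). It counts positive words minus negative
# words, with no notion of context -- which is exactly the problem.
POSITIVE = {"great", "awesome", "love", "excited", "happy"}
NEGATIVE = {"terrible", "hate", "awful", "angry", "late"}

def sentiment_score(text):
    """Return (# positive words) - (# negative words) in the text."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

# A sarcastic complaint reads as positive to a literal scorer:
print(sentiment_score("Wow, great new plan by the NSA"))  # prints 1 ("positive")
print(sentiment_score("I hate this awful plan"))          # prints -2 ("negative")
```

Both sentences express the same opinion, but the scorer only sees the surface words — all the context from part one above is invisible to it.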
The NSA can sink its money into making a complex, brilliant program that searches phrases, hashtags, and formatting; tracks the sentimental tweets of users and their internet searches to determine their views and opinions; connects media references to their sources and the context involved there; and links accusations of trolling to offenders and evaluates their veracity. They could sponsor the creation of an incredibly intelligent machine, and that is why I've included their claims as my feature today. I want this program to be made. But I don't want it in the hands of the NSA.
In Cory Doctorow's Little Brother, he discusses the potential implications of a world where your every move and search is tracked and analyzed for threats. Even as the government's tracking technology improved, there were still so many false positives and negatives: 0.01% of a billion is still a hundred thousand people being put on the blacklist. And people kept finding new ways to trick the system. If the NSA gets its sarcasm detector and even one threat is missed, even one comment misconstrued, the first response will come from those who make their sarcasm more complicated, harder to detect. So the NSA makes a new program. It's a battle without a winner.
So don't let the silliness of such a request fool you. It CAN be done, and it WILL be dangerous to the neutrality and safety of posting even stupid things on the internet. This battle hasn't come to a head yet, but if you're interested, there are other battles being waged right now. Start with this petition on net neutrality. And please, whatever sarcasm you choose to use, don't threaten violence where it could be misconstrued. If not for the NSA's peace of mind, then for mine, and the 7 billion other people on earth who might take you seriously and react accordingly. Stay free, internet, and I'll see you next week!