It’s getting nasty out there for researchers and companies alike when it comes to disclosure of security vulnerabilities.
On the one hand we have companies like Apple and HID (of RFID tag fame; see here and here) who have been very aggressive about protecting their brand and image by putting pressure on researchers who disclose vulnerabilities in their products. David Maynor today presented a detailed account of his research and dealings with Apple regarding the wireless vulnerability disclosed at last year’s BlackHat. It makes for some pretty interesting reading on these situations.
On the other end of the spectrum we have some questionable situations where corporate entities are throwing their vulnerability research weight around for more than customer and general marketing benefits.
I’m no lawyer, don’t claim to be one, and don’t even play one on TV. But it seems to me that we do all of us a disservice by not having an industry standard for vulnerability disclosure. An established industry standard practice, even a de facto one, would give researchers and their companies a baseline to follow for: 1) researching and confirming vulnerabilities, 2) disclosing them to the impacted company, and 3) disclosing them to the broader public. Right now it’s the wild, wild west, every gun for themselves.
Having an industry standard practice won’t be an impenetrable shield, but it will act as a guide so researchers and companies know they are in line with how vulnerability research and disclosure should be performed.
As it stands now, researchers and their companies are in a very precarious position: disclose and risk punitive action, keep their research results in the shadows, or roll the dice and see what happens. At the same time, Yellow Hats will use their vulnerability research to bash competitors about the head and shoulders, leaving a potentially shrinking middle ground of companies with enough financial backing to continue disclosing vulnerabilities.
The current situation is untenable and stands to erode into a haven for lawyers, lawsuits and researchers with damaged reputations and careers. No one benefits from creating an industry where the only ones willing to research vulnerabilities are those who profit from their nefarious use.
We as an industry need to get together and define vulnerability research and disclosure best practices while people and companies are still willing to risk contributing to this area.
You remember Metcalfe, ’63 Yankees, pitcher. You know…oh, ah, wrong guy.
I mean Bob Metcalfe! The guy we all know as the creator of Ethernet while he was at Xerox PARC in 1973. Bob Metcalfe was inducted into the National Inventors Hall of Fame earlier this month. Congrats, Bob.
First let me say that I think this is a significant event for all of us. The National Inventors Hall of Fame recognizes some pretty serious inventions in our history. But more importantly, it recognizes the people who made those contributions. Some notables in the Inventors Hall of Fame are Alexander Graham Bell, Orville and Wilbur Wright, and Guglielmo Marconi.
My first exposure to Ethernet was in 1987 when we built an Ethernet network with Sun workstations for developing new code. (Yep, dating myself again here, aren’t I.) 3Mb data transfer speed, coax cable, BNC-style connectors, vampire taps. We had to use a bridge (not a router or a brouter, but a bridge) to connect portions of the network in our building. I also had some exposure to 3Com’s Ethernet products for IBM PCs. That was back when 3Com sold you everything: coax, cards, and the file server.
I followed a few of the contributors to Xerox PARC for some time. In addition to Metcalfe, I was also a big Alan Kay fan. Alan was later an Apple fellow, where he continued his research into the Apple user interface and concepts like digital avatars. A lot of what I do today in product design comes from things I learned following Alan Kay.
A lot of great things came out of Xerox PARC, thanks to the people doing the research and inventing there, and people like Metcalfe, Warnock, Kay and Jobs (no, he didn’t work there) who brought PARC inventions to the light of day.
Now, not all of Metcalfe’s pronouncements were spot on or things I agree with. Bob wasn’t a supporter of open source and thought Windows 2000 would make open source irrelevant. One of Alan Kay’s sayings applies here: he compared research to playing baseball, where a .300+ batting average is good hitting, and the same holds for research. (My paraphrase of Alan’s comments on this.)
Ethernet, graphical user interfaces, WYSIWYG design, the mouse, object-oriented programming, Interpress (which later led to PostScript), Smalltalk (still one of my favorite languages and environments), the Dynabook (a precursor concept of the laptop), Logo, and Interlisp were just a few of the things that came out of PARC.
I could go on and on about inventors of the network and security technologies we use today. All of this helps you realize that we really do stand on the shoulders of giants. Once in a while it is good to reflect back on the contributions that we so heavily rely on but take so much for granted. To sum it up in one word, “thanks”.
A co-worker and very good friend of mine, John Curry, picked up his digital pen and started a new blog at http://www.village-elder.com. Please check it out; I know you will find it useful. Here’s why.
He’s just a good guy who’s unbelievably smart and loves to help other people.
Kudos on starting the new blog, John. I know many will come to rely on your information, thoughts and ideas. – Mitchell
Two interesting business deals announced on the same day today. One, an up-and-comer that many say has very interesting technology. The other, a product very long in the tooth and with only a few, albeit large, customers.
Many have admired ConSentry’s proprietary chips for network traffic content inspection. They have evolved this into a post-connect NAC product (with some pre-connect functionality bolted in). A hardware solution does fit Alcatel-Lucent’s product model, but they’ll need a much richer pre-connect component to remain competitive. An interesting OEM deal and certainly one to watch to see how much traction it gains them.
On the other end of the spectrum we have Patchlink acquiring the Harris STAT vulnerability scanner product. For years Harris STAT has been the product to replace in federal government accounts. STAT was one of the first vulnerability scanner products, requiring endpoint credentials, and hasn’t kept pace with the evolution of the vulnerability management market.
It is a bit puzzling trying to understand the Patchlink – STAT deal. Was it for customers? Probably not, given the limited and shrinking STAT customer base. Maybe it was to acquire the STAT database of vulnerability rules? It’s really hard to say. And harder to understand the importance of this deal or how it advances the game for Patchlink.
My friend and blog reader Brad Rich pointed me to an upcoming SANS seminar on Vista security. If you don’t know about SANS, you absolutely should – they are a great resource for learning about and keeping current in network security.
I’ve not listened to this SANS Tool Talk on Vista, but SANS materials are usually high quality, with competent presenters and instructors. If you aren’t familiar with the Tool Talk series, here’s a description from SANS:
SANS Tool Talks are an opportunity for you to hear from Information Security Vendors. At SANS we believe that you cannot accomplish Information Security tasks without tools. A surprising number of security professionals have no idea what technology is available in the marketplace. Tool Talks are designed to give you a solid understanding of a problem, and how a vendor’s commercial tool can be used to solve or mitigate that problem.
The online seminar is tomorrow, Feb. 27 at 1:00 p.m. EST. See the link above for more details about the webcast.
Thanks for the tip, Brad.
During The Converging Minute segment on podcast episode 32, I discussed some of the changes open source is undergoing.
Like most legends and myths, open source has its share of both. We all romanticize the programmer or security engineer who, out of their own need, writes some software or scripts, shares it with others and eventually starts up an open source project on SourceForge. As the project grows, others join in, and if it’s useful the project gains a critical mass of developers and users. Bug reports, code fixes and new functionality are contributed by dozens, maybe even hundreds or thousands, of community members working together across the net to improve and advance the project. This network of community activists works cooperatively to foster an open community of many developers and many more users of the technology.
That’s the persona open source conjures up for many of us, what I’d call a traditional definition of open source, much like the one described in The Cathedral and the Bazaar. Many want to hold on to this view of open source even though the reality of many projects is that there are only a very few contributors and even fewer developers. Some are tightly controlled and don’t let much, or any, outside code into the project, to the point of re-assigning the copyright of contributions to the project’s owners. I’m not saying all open source projects are this way, but I have personally seen this happen.
But open source means many more things today. Many argue, for example, over whether you must use an OSI-approved license (like the GPL, BSD, Apache, etc.) in order to call your code or product open source. Maybe just meeting the OSI requirements is good enough. Or maybe having a public CVS source tree, or shipping source with the product, will do. Commercial open source products may be offered under a private hybrid license, or use something like the GPL with additional license restrictions layered on. The latter can be confusing because the additional restrictions may not be obvious, giving the community the impression that it’s run-of-the-mill GPL when that isn’t always the case. This also helps perpetuate the myth that only GPL-licensed software is open source.
When it comes to commercialized open source options, I believe you really need to think about the business goals and the community. Are you really working to attract a large number of developers and contributors to the software? Or are you looking to attract free users of the software, and to provide other commercial options for additional functionality or products?
It’s perfectly fine to offer products as open source, whether under a private or an OSI-approved license, and then have other options for commercial or for-profit use. It’s not a religious debate. Open source isn’t a trademarked term or industry standard that everyone has to rigidly adhere to; the opportunity for that is long past. I do believe that in order to call yourself an open source product (not just one that uses open source) you must provide source code in a very accessible manner to those interested, for whatever the reason.
What’s most important to me is being very clear with your users: what they can do for free, and when they have to pay to use the product or additional features. And if you call it open source, you must provide your source code for the free components. You are a company, and users understand you have to make money. Just don’t try to fool them or be sly about how you do it.
That’s what users care about. Don’t hide behind fancy terms, small print or make the user hunt down what the license terms are. Make it crystal clear, in plain language the user can understand. Most people, unless they have a bone to pick about rigid adherence to the OSI definition, just want to know under what conditions can they use the software and when do they have to pay something. They’ll figure out if the licensing works for them, is too restrictive or just plain confusing.
If the user must consult the lawyers to use free software, then it’s not clear.
Note: Alan also blogged about vendors using the GPL and layering on license restrictions. See Alan’s post GPL or other open license, what is the difference?
And now… Heeeeeeere’s Michael! Well, Michael Santarcangelo, anyway: thought leader behind the Security Catalyst community and his own company, The Michael Angelo Group. Michael joins Alan and me on podcast episode 32 to share the story behind why he is putting the effort into creating a community of trusted security thought leaders, and an environment to support those working in security today, both newbie and experienced. You can also find Michael’s blog and podcast here.
In this podcast episode of The Converging Minute I discuss the changes happening within open source software and how that impacts networking, security, and my ideas like the Unified Network Platform (pdf).
Alan and I have another rousing discussion during This Week In Security, where we discuss the Month of Vulnerabilities, Vista bashing by Symantec and others, my pronouncement of Yellow Hats, and Cisco’s revolving-door decisions about whether the CTA agent will be open sourced or not (don’t count on it, btw).
Join us on the podcast for a fun and lively discussion. Also, Alan puts another trophy on his fireplace mantel (do they have those in south Florida?) by winning the flag football championship coaching his son’s team, so send him an email congratulating him on that.
Please send along any questions, comments and suggestions to firstname.lastname@example.org. As always, thanks for listening to the podcast and for reading our blogs.
Just when I thought I’d laid this Yellow Hat idea down for the weekend, what do I come across but an article asking the same thing regarding Vista UAC issues. Brett Thomas, writer at Bit-Tech, discusses the ongoing news and analysis of Vista’s UAC deficiencies. One of the things he calls into question is whether Symantec has gone overboard and is too zealous in their disclosures.
Symantec illustrated the process, created a flowchart and even wrote and provided code showing how the flaw can be exploited via “calling rogue DLL files whilst using an unrelated legacy process”. (I’m not sure I follow what that means – how does a new Vista operating system have legacy processes? Not my area of research. Anyway….)
He calls into question whether it was appropriate for Symantec to take it this far when it’s not research that could be turned into an AV signature or some product feature. (My interpretation.) So let me say what I think he’s trying to say. Now, I don’t know this to be true, but…
Is it possible Symantec is doing this to undermine Microsoft’s positioning of Vista as a much more secure operating system? It’s similar to other efforts, like John Thompson asserting that Vista isn’t the “security solution” Microsoft claims, when that isn’t how Microsoft has positioned Vista in the first place.
It’s possible. Warning…Could be a yellow hat here folks.
Interested in a “No Vulnerability Pimps” t-shirt? Send me an email.
We have white hats and we have black hats. Everyone understands their purpose, their roles, and their motives. One intends to better the security of others through responsible research and disclosure; the other intends to fatten their own pocketbooks or create victims of their efforts. The first is for good, the latter is bad.
But there is a third category: unacknowledged, unnamed, and with motives that largely walk a fine line between serving customers and serving their own ambitions. Yellow Hats.
Yellow Hats. I chose the color yellow because of its dual meaning. Yellow, because they serve as the canary in the coal mine of security, foretelling imminent danger from security threats and vulnerabilities. They protect their customers by providing rapid updates to AV, anti-phishing, IDS, vulnerability scanner and other products.
But yellow also has a second, more sinister meaning. While on the surface their intentions may seem to serve the greater good, some seek to pervert that purpose for more nefarious, self-serving needs. Marketing. Taking white hat research and twisting it to a purpose driven by greed, even to harm and impugn others. Yellow because they hide behind the mask of security research and exploit findings to mar or scar competitors and enemies in the market.
Yellow Hats seek to rush news to market and trumpet the news of flaws in competitors’ products. They flaunt news of security risks against which their products do not protect. Not limited to any single vendor, they pretend to serve the greater good, but their edacity pushes them beyond the aims of serving the customer to more self-serving ambitions. Beware the Yellow Hat.
They can return, return to the good intentions of White Hats. But it requires that we diligently guide them back into the fold of good deeds.
Symantec is poised to ship Norton 360 next month. It’s basically a Windows OneCare knockoff: AV, anti-spyware, anti-phishing, backup, etc. Same stuff for $79 (more than OneCare).
Maybe it’s a product for existing Symantec customers. But then again, if you’ve run Symantec’s AV products, like I have, you know how big and bloated they are and how they slow your machine. Maybe a lot of people still buy the yellow-box software, but I don’t. And based on the beta user experiences, I don’t think I’ll be switching to Norton 360 anytime soon.