Remember that one game that refused to run on the Gamecache? I decided to disable the onboard Realtek NIC and order a PCI-E 1x NIC for the girls' computer. I ordered two on eBay for a total of 12 bucks, and they shipped via the slow boat so they just arrived a couple of days ago. I got the Intel NIC installed and the eldest began the install of Elder Scrolls Online. The install is absolutely massive. We never got past the first 7% before…now we are completely installed two hours later. The initial install is 85 gigabytes with another 60 gigs in patches. Now the eldest can play all of her games on the Gamecache without stuffing the local SSD…:) It took a bit of research and some troubleshooting, but the Gamecache drive is now 100% operational for all of her games…and I am stoked about this too..:) I will make one more post with the full list of games she has on the Gamecache once she gets done installing everything. When it's all said and done she will be chewing up just over 550 gigs of storage…which is less than 25% of the 4TB I had available. Needed to do something with that space..:)
So I ordered a couple of PCI-E network cards, one for my machine and one for the girls' tower. Let's see if that helps the eldest's one game not crash the entire networking stack…:) Realteks are decent if you aren't doing heavy networking like I am. It will take a couple of weeks for them to come in (ordered from mainland China on the slow boat).
Now for the Gamecache update. I really didn't need to do this..but I figured why not? I moved my Battle.net games (which are the only ones I play…being SC2 and D3) to my own iSCSI target on the FreeNAS. It's a total of 40 gigs…but it's 40 gigs I do not have to download again when I scrub my computer (which I am going to be doing soon, as I am having weird video issues…I had similar issues on the girls' tower and a scrub fixed them).
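For anyone curious, carving out a small per-user target like this boils down to creating another zvol and exporting it; a minimal sketch (the pool and zvol names here are just examples, not necessarily my actual ones):

```shell
# Create a sparse (thin provisioned) 64G zvol for the Battle.net games;
# -s means it only consumes real space as data is written to it.
zfs create -s -V 64G Data/BattleNet
# The iSCSI target/extent that exposes this zvol to the workstation is
# then defined in the FreeNAS web UI under Sharing -> Block (iSCSI).
```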
My workstation actually runs nearly everything off the FreeNAS in one way or another. My Documents, Music, Videos, and Downloads actually run from an SMB share. I use most of the files myself, and there are many that are used to rebuild internal machines when they get scrubbed. What does the Gamecache give me that the SMB share doesn't? First is speed: SMB is not multi-threaded, so it does not take advantage of the multiple cores I have going. Secondly, I do not want anyone else possibly messing up the game files.
Everything has ZFS snapshots going as well, so if something goes boom..I can just roll back that dataset or even individual files…:) Kinda neat..:)
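The snapshot/rollback dance is only a couple of commands; a sketch against the game cache zvol (the snapshot name is just an example):

```shell
# Grab a point-in-time snapshot before doing anything risky:
zfs snapshot Data/Steamcache@pre-patch
# If something goes boom, roll the whole zvol back to it:
zfs rollback Data/Steamcache@pre-patch
```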
So far my eldest has all of her games on the Gamecache drive. Unfortunately Elder Scrolls Online refused to run on the Gamecache drive. It would hang up the network card, causing all network connectivity to go away. Only a restart of the machine would fix it. However, the fact that all of the other games are on the Gamecache drive means the monster game now fits on her local storage without running out of local space. I'll troubleshoot the problems later (I think I need to put in an Intel NIC). I will try a few things once I can order the new NIC.
It is working out better than I anticipated. Right now the eldest has been installing all of her games on the G drive. I found out one of her games is nearly 200 gigs in size. Holy crap, Batman. She can easily chew through more than 400 gigs of storage just with the games she plays. Here are the ones that take up the most space:
- Diablo III
- Starcraft II
- Lord of the Rings Online
- Elder Scrolls Online (this one is the nearly 225 gigabyte monster)
She informed me she has a ton of smaller games that she will be installing now that she has the space. She also asked..what happens if I run out of my 1 terabyte allocation? I told her it is a couple of mouse clicks to add more space. Let's see if she chews through the whole terabyte..if she does I have 3.2 terabytes waiting..:)
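Those "couple of mouse clicks" are really one property change underneath; a sketch of growing the cap (after which the client OS just extends its partition to use the new space):

```shell
# Bump the zvol's advertised size from 1T to 2T; being thin provisioned,
# it still only consumes what has actually been written.
zfs set volsize=2T Data/Steamcache
```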
*Game Cache Update* This one is about the built-in compression. For VMs you can sometimes get 5-10x compression because VMs are mostly empty space. ZFS does compression transparently. Right now, as part of the eldest's game cache, her system thinks it has written about 75 gigabytes of data. ZFS compression has reduced that to 52.5G in the background. This is roughly a 1.41x reduction in size just from basic compression. Normally with my file types (movies, music..mainly stuff that is already compressed) I do not see any real compression. With her Steam apps the compression is much higher. It will be interesting to see if it goes up or down as she loads up the rest of her games.
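The ratio ZFS reports is just logical bytes written divided by physical bytes stored, so it's easy to sanity-check against the numbers below (74.0G logical vs 52.5G on disk):

```shell
# compressratio = logicalused / used
ratio=$(awk 'BEGIN { printf "%.2f", 74.0 / 52.5 }')
echo "compressratio: ${ratio}x"
```

which prints `compressratio: 1.41x`, matching what `zfs get` reports.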
NAME             PROPERTY                VALUE                   SOURCE
Data/Steamcache  type                    volume                  -
Data/Steamcache  creation                Tue Nov 19 17:11 2019   -
Data/Steamcache  used                    52.5G                   -
Data/Steamcache  available               4.17T                   -
Data/Steamcache  referenced              52.5G                   -
Data/Steamcache  compressratio           1.41x                   -
Data/Steamcache  reservation             none                    default
Data/Steamcache  volsize                 1.00T                   local
Data/Steamcache  volblocksize            128K                    -
Data/Steamcache  checksum                on                      default
Data/Steamcache  compression             lz4                     inherited from Data
Data/Steamcache  readonly                off                     default
Data/Steamcache  copies                  1                       inherited from Data
Data/Steamcache  refreservation          none                    default
Data/Steamcache  primarycache            all                     default
Data/Steamcache  secondarycache          all                     default
Data/Steamcache  usedbysnapshots         0                       -
Data/Steamcache  usedbydataset           52.5G                   -
Data/Steamcache  usedbychildren          0                       -
Data/Steamcache  usedbyrefreservation    0                       -
Data/Steamcache  logbias                 latency                 default
Data/Steamcache  dedup                   off                     default
Data/Steamcache  mlslabel                                        -
Data/Steamcache  sync                    disabled                inherited from Data
Data/Steamcache  refcompressratio        1.41x                   -
Data/Steamcache  written                 52.5G                   -
Data/Steamcache  logicalused             74.0G                   -
Data/Steamcache  logicalreferenced       74.0G                   -
Data/Steamcache  volmode                 default                 default
Data/Steamcache  snapshot_limit          none                    default
Data/Steamcache  snapshot_count          none                    default
Data/Steamcache  redundant_metadata      all                     default
Data/Steamcache  org.freebsd.ioc:active  yes                     inherited from Data
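That dump comes from `zfs get all Data/Steamcache`; if you only want the interesting numbers you can ask for them directly:

```shell
# Pull only the space/compression properties for the game cache zvol:
zfs get -o property,value used,logicalused,compressratio,volsize Data/Steamcache
```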
I made an earlier post about an experiment I am running. So far so good. The eldest is putting her games onto the new G drive her computer sees. The magic of iSCSI makes it appear as a local hard drive even though it's on a network server. I am a HUGE fan of iSCSI and I use it as much as I can…especially when the storage is Linux or UNIX. I did notice that the transfer was maxing out at 650 megabits/second…I know the machine can do better..it used to do 2 gigabits/second when it was a backup target. What has changed throughout the years? I did a little bit of digging.

ZFS is all about data safety. You have to be extremely determined to make it lose data. Sometimes that ultimate safety comes at the price of performance. I started looking at the numbers and noticed RAM (32 gigs) was not a problem. CPU usage was less than 20% max. The disks, however, were maxed out. It turns out that ZFS has a ZIL (ZFS Intent Log) that is always present. If there is no ZIL SSD, then it lives on the main drives. I thought that double (or in this case triple) writing to the drives was the culprit…but nope..not there. I had to dig deeper, into the actual disk I/O calls.

It turns out that the default setting for synchronous writes defers to the application. If the application says you must write synchronously, ZFS will not report back that the write transaction was completed until it makes both of its copies and verifies them on the array. Loosely translated into RAID terms, this is a write-through. Since ZFS is a COW filesystem I am not concerned about data getting corrupted when written..it won't (again, unless you have built it wrong or configured it wrong…something like that)…so I found a setting and disabled the forcing of synchronous writes. I effectively turned my FreeNAS into a giant write-back caching drive.
Now the data gets dumped into the FreeNAS server's RAM, the server says "I have it," and the client moves on to the next task..either another write request or something else. Once I did that, the disks went from maxing out at 25% usage to nearly 50% usage and the data transfers maxed out the gigabit connection. That's how it is supposed to be.
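For reference, the setting in question is the `sync` property; a sketch of flipping it at the pool-level dataset so everything underneath inherits it:

```shell
# Stop forcing synchronous writes; ZFS acknowledges once the data is in
# RAM and flushes it to disk with the next transaction group.
zfs set sync=disabled Data
# The game cache zvol picks it up via inheritance:
zfs get sync Data/Steamcache
```

To be clear about the trade-off: a power loss can now cost the last few seconds of acknowledged writes, which is fine for a re-downloadable game cache but not for much else.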
There are times for forcing synchronous writes…like databases, financials….anything where the data MUST be verified as written before things move on. That's when you can force synchronous writes and use a ZIL drive. This is an SSD (typically) that holds the writes in a non-volatile cache until the hard disks catch up. The ZIL grabs the data, verifies its integrity, tells the application the write has been accomplished (because it has), and then passes those writes to the array as sequential writes (something hard drives are much better at than random writes). What's even nicer is that you can set the writing behavior per dataset or per zvol. The entire filesystem doesn't have to be one or the other, and it doesn't hurt overall ZFS performance. More as I figure it out, with the ultimate question being…how do games perform when operated like this…stay tuned.
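A sketch of that setup (the device path and the database zvol name are made up for illustration):

```shell
# Attach an SSD as a dedicated log (SLOG/ZIL) device for the pool:
zpool add Data log gptid/my-slog-ssd
# Then force sync semantics only on the datasets that need them:
zfs set sync=always Data/DatabaseVol
```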
I came across an interesting use case for FreeNAS. My eldest daughter likes games that are huge. Like 100-250 gigabytes huge. I simply cannot afford to keep adding SSD storage to her machine, and I will not do hard disks as main storage..under Windows 10 it's too painfully slow. What Lawrence had done was take a FreeNAS machine, slice off a portion of the raw storage, and present it to the workstation as a hard drive over his network. His son now runs his large games from the FreeNAS zvol as if it were local. What's neat is the game's initial load time is a bit slower (the NAS is hard-drive based), but once it's loaded..there's no perceptible difference in gaming performance despite a constant stream of data from the server…usually less than 150 megabits/sec. Since I have multiple terabytes of free space I am doing the same thing for my eldest. I am also doing what is called thin provisioning, so it initially starts at zero usage and grows until she reaches her cap of 1 terabyte. Let's see how this works, as my quad-core Xeon CPU is light years faster (with 4 times more RAM at 32 gigabytes) than his FreeNAS Mini's dual-core Atom with 8 gigs of RAM. If this works…I have a new idea for future computer builds here at the house..<G>
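Assuming the zvol is made from the command line rather than the FreeNAS UI, the thin provisioned 1 terabyte slice would look something like:

```shell
# -s = sparse/thin provisioned: the client sees a 1T drive, but actual
# usage starts at zero and grows only as games are installed.
zfs create -s -V 1T -o volblocksize=128K Data/Steamcache
```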
Get ready for a long post folks with embedded links. For previous posts please look in the 737 category.
Yesterday’s post gave some details about the 737 MAX issues. Karl’s post yesterday goes into greater detail:
Let me note up front — I’m not a pilot. I am, however, a software and hardware guy with a few decades of experience, including writing quite a lot of code that runs physical “things”, some of them being quite large, complex, expensive and, if something goes wrong, potentially dangerous. Flight isn’t all that complex at its core; it’s simply a dance comprised of lift, gravity, thrust and drag. What makes it complex is the scale and physical limits we wish to approach or exceed (e.g. you want to go how fast, in air how thin, with a distance traveled of how far and with how many people on board along with you as well as with a bunch of other aircraft in the air at the same time?)
The sequence of circumstances that has left the 737MAX to arguably have the worst hull safety rating in the history of commercial jet aviation appears, from what I can figure out reading public sources, to have basically gone something like this:
- The 737, a venerable design with literal millions of flight hours, a nice, predictable handling paradigm and an excellent safety record (the basic design of the hull is 50 years old!) was running into competition resulting from its older-series engines that bypassed less air (and thus are less efficient in their consumption of fuel.) Boeing sought to correct this competitive disadvantage to keep selling new airplanes.
- The means to correct the efficiency problem is to use newer, higher-bypass engines which, in order to obtain their materially lower fuel consumption, are physically larger in diameter.
- The aircraft’s main landing gear has to fit in the space available. To make the larger engines fit the landing gear has to be made longer (and thus larger, bigger and stronger) or the engines will hit the ground when taking off and landing.
- The longer landing gear for where the original design specified the engines to go (but with the larger engines) would not fit in the place where it had to go when it was retracted.
- Boeing, instead of redesigning the hull including wings, tail and similar from the ground up for larger engines, which would have (1) taken quite a lot of time and (2) been very expensive, because (among other things) it would require a full, new-from-zero certification, decided to move the engines forward in their mounting point which allowed them to be moved upward as well, and thus the landing gear didn’t have to be as long, heavy and large — and will fit.
- However, moving the engines upward and forward caused the handling of the aircraft to no longer be nice and predictable. As the angle of attack (that is, the angle of the aircraft relative to the “wind” flowing over it) increased the larger, more-forward and higher mounted engines caused more lift to appear than expected.
- To compensate for that Boeing programmed a computer to look at the angle of attack of the aircraft and have the computer, without notice to the pilots, transparently add negative trim as the angle-of-attack increased.
- In other words, instead of fixing the hardware, which would have been very expensive since it would have required basically a whole new airplane be designed from scratch, it appears Boeing decided to put a band-aid on the issue in software and by doing so act like there was no problem at all when in fact it was simply covered up and made invisible to the person flying the plane by programming a computer to transparently hide it.
- Because Boeing had gone to an “everything we can possibly stick on the list is an option at extra cost and we will lease that to you on an hours-run basis, you don’t buy it” model, exactly as has been done with engines and other parts including avionics in said aircraft, said shift being largely responsible for the rocket shot higher in the firm’s stock price over the last several years, the standard configuration only included one angle-of-attack sensor. A second one, and a warning in the cockpit that the two don’t agree, is an extra cost option and was not required for certification! (Update: There is some question as to whether there is one or two, but it appears if there are two physically present the “standard” configuration only USES one at any given time. Whether literally or effectively, it appears the “standard” configuration has one.)
- Most of the certification compliance testing and documentation is not done by the FAA any more. It’s done by the company itself which “self-certifies” that everything is all wonderful, great, and has sufficient redundancy and protections to be safe to operate in the base, certified configuration. In short there is no requirement that a third, non-conflicted and competent party look at everything in the design and sign off on it — and thus nobody did, and the plane was granted certification without requiring active redundancy in those sensors.
- Said extra cost option and display was not on either the Lion Air or Ethiopian jets that crashed. It is on the 737MAX jets being flown by US carriers, none of which have crashed.
- It has been reported that the jackscrew, which as the name implies is a long screw that sets the trim angle on the elevator, has been recovered from the Ethiopian crash, is intact and was in the full down position. No pilot in his right mind would intentionally command such a setting, especially close to the ground. It is therefore fair to presume until demonstrated otherwise that the computer put the jackscrew in that position and not the pilot.
- Given where the jackscrew was found, and that there is no reasonable explanation for the pilot having commanded it to be there, why is the computer allowed to put that sort of an extreme negative trim offset on the aircraft in the first place? Is that sort of negative offset capability reasonable under the design criteria for the software “hack-around-the-aerodynamics” issue? Has nobody at Boeing heard of a thing called a “limit switch”?
- It has been reported from public information that both Lion Air and the Ethiopian jet had wild fluctuations in their rate of climb or descent and at the time they disappeared from tracking both were indicating significant rates of climb. For obvious reasons you do not hit the ground if you have a positive rate of altitude change unless you hit a cumulogranite cloud (e.g. side of a mountain or similar), which is not what happened in either case.
- The data source for that public information on rate of climb or descent did not come from radar; while I don’t have a definitive statement on the data source public information makes clear it almost-certainly came from a transponder found on most commercial airliners known as ADS-B. Said transponder is on the airplane itself. It’s obvious that the data in question was either crap, materially delayed or it was indicating insanely wild fluctuations in the aircraft’s vertical rate of speed (which no pilot would cause intentionally) since you don’t hit the ground while gaining altitude and if the transponder was sending crap data that ground observers were able to receive the obvious implication is that the rest of the aircraft’s instruments and computers were also getting crap data of some kind and were acting on it, leading to the crazy vertical speed profile.
- The Lion Air plane that crashed several months ago is reported to have had in its log complaints of misbehavior consistent with this problem in the days before it crashed. I have not seen reports that the Ethiopian aircraft had similar complaints logged. Was this because it hadn’t happened previously to that specific aircraft or did the previous crews have the problem but not log it?
- The copilot on the Ethiopian aircraft was reported to have had a grand total of 200 hours in the air. I remind you that to get a private pilot’s license in the US to fly a little Cessna, by yourself, in good weather and without anyone on board compensating you in any way, you must log at least 40 hours. Few people are good enough to pass, by the way, with that 40 hours in the air; most students require more. To get a bare commercial certificate (e.g. you can take someone in your aircraft who pays you something) you must have logged 250 hours in the US, with at least 100 of them as pilot-in-command and 50 of them cross-country. The “first officer” on that flight didn’t even meet the requirements in the US to take a person in a Cessna 172 single-engine piston airplane for a 15-minute sightseeing flight!
- The odds of the one pilot who actually was a commercial pilot under US rules in the cockpit of the Ethiopian flight having trained on the potential for this single-data-source failure of the aircraft and what would happen if it occurred (thus knowing how to recognize and take care of it) via simulator time or other meaningful preparation is likely zero. The odds of the second putative flight officer having done so are zero; he wasn’t even qualified to fly a single-engine piston aircraft for money under US rules.
So there are some pilot issues with the Ethiopian pilot training in terms of the copilot. However, that doesn't change the fact that an engine design change without subsequent airframe changes means the aircraft had a large potential to stall itself. The solution was a software package that was able to effectively say F-U to the pilots and do whatever it wanted. The A330 had a similar issue..a hidden system designed to prevent stalling that would freak out if it got bad data from one of the two or three sensors. Please read the linked article at the Market Ticker above from a guy who writes code. The thinking that software can fix everything is a dangerous concept that is killing folks..and will continue to do so. Remember..the automation is made by people, who are not perfect. "AI", a term that is thrown around without any thought to what it means, is designed by flawed people..it will never be perfect.
Karl has posted an article today that pretty much agrees with me..folks need to go to prison over this. It is plain that Boeing knew MCAS was trying to paper over a flawed airframe design.
This led to Boeing rushing the certification (which never should have passed the FAA), trying to get the 737 MAX out to compete with the A320. The Seattle Times (which Karl linked to in the above post) has some shocking details, and IMO there's no way folks at Boeing didn't know that the design of the 737 MAX and the subsequent MCAS system was a dangerous combination that inevitably turned deadly.
Qantas flight 72 nearly crashed after one of its sensing computers got inconsistent data from its AOA (angle of attack) sensors. It seems bad AOA data also causes a freak-out of the MCAS software aboard the 737 MAX series. The reason the MAX has MCAS is the engines. The engines on the 737 MAX are larger, and because they are unable to fit under the wing they are not only further forward but also mounted higher than on previous-gen 737s. Frankly, an aircraft that requires a hidden, pilot-overriding system to be certified by the FAA needs to be removed from the airspace and never allowed to return. The FAA seriously dropped the ball when it came to certifying this aircraft. Why Boeing was allowed to do ANY self-certification is unconscionable. Anyone who knew of this issue should be held personally and corporately liable and suffer fines, loss of revenue and prison time. That is the only thing (not even hundreds of deaths) that will change the behavior that led to these disasters. Here are a couple of videos about the 737 MAX MCAS system:
The 787 was grounded after battery fires and there were no fatalities. While I think the rest of the world grounding or denying airspace to the 737 MAX series is partially anti-Trump, I also think it is partially because for whatever reason two of these brand new jets have crashed within 5 months of each other. What was startling was the FAA refusing to ground the jets in the face of the following factors:
- The new jets had even more software complexity with possibly inadequate documentation and training for the pilots
- Multiple reports of the aircraft having sudden nose down instances requiring intervention by the pilots
- Pilots filing multiple complaints of inadequate documentation and training about the new aircraft’s systems and automation
More has yet to come out, but I do agree with Trump telling his FAA administrator to ground the jets after some serious allegations and the proximity of two aircraft of the same type crashing within five months of each other. Only after Trump ordered the FAA to ground the jets have Boeing and the airlines said they are grounding them. The entire US aviation industry had been doubling down, saying the jets were safe. I am not confident in the quality of the software in modern anything…and with civilian jets depending more and more on software, I think something like this was bound to happen either in aviation or in automobiles. Unfortunately, it usually takes a number of fatalities before folks listen to the people who have been talking about the horrid state of software in everything. Maybe now, in high-life-hazard industries, some code quality regulations will be put into place to prevent something like this from happening again. If this is not done, the number of crashes will continue to stack up. It will also be interesting to see if the "findings" are truthful or if yet another coverup of deadly code will occur. (The Prius software bug comes to mind here. It was finally outed, and Toyota was forced to fix it via a software update instead of blaming the operators of the vehicles.)
Juan Brown gives his take on both Lionair and the Ethiopia crashes:
Two new 737 MAX 8 aircraft crashing within five months of each other. I have been warning folks closest to me about the horrid quality of software these days. Frankly, even if I could fly right now I wouldn't. Self-driving cars? No way. My favorite operating systems (Linux and BSD) are not immune from poor coding. It's everywhere. I hope the MCAS is not the cause…I was interviewed about the state of Windows updates. Needless to say, I was not kind. The state of software today IMO means if you want to depend on automation you are sacrificing lives. Keep in mind that when people are presented with high amounts of automation..they get lulled into a false sense of security…and lose the ability to think critically and react properly. MCAS has not had a spotless history:
I was interviewed for an article on the state of Windows 10. Windows 10/Server 2016 REALLY needs to have the QA team brought back. Microsoft is seriously affecting the security of their products. I am not the only one holding off on multiple versions (not several as mentioned in the article..but an average of two)…and I have automatic updates disabled at three of my clients that have run into serious issues with Server 2016 updates. IMO Windows 10/Server 2016 is working its way to becoming another Vista if Microsoft doesn't either bring back their QA team or start listening to their insiders.
Opinions of Windows 10 run hot and cold for IT experts
By Eddie Lockhart
As 2018 gives way to 2019, opinions of Windows 10 range from praise to disdain.
For some IT pros, the operating system is a big step up from Windows 7, and hiccups such as the October 2018 Update file-deletion problem are just business as usual with any technology.
“Everybody makes mistakes, and [Microsoft] rolled it back in a hurry,” said Willem Bagchus, a messaging and collaboration specialist at United Bank, based in Parkersburg, W.Va. “No company and no technology is infallible.”
For other experts, it’s a sign that Microsoft doesn’t listen to its customers and a gateway to bigger problems.
“Their insiders told them they had a major problem,” said William Warren, owner of Emmanuel Technology Consulting, an IT services company in Brunswick, Md. “Microsoft released [the October 2018 Update] anyway. I guess they figured nobody was going to complain.”
No matter a person’s opinions of Windows 10, the past year was anything but smooth for the operating system, with the last few months in particular making headlines — and not always for the best reasons.
Windows 10 update approach comes under fire
The biggest problem Windows 10 ran into in 2018 was issues with the October 2018 Update, which it released on October 3. Microsoft recalled the update only three days after releasing it because users reported missing files after they updated. The company did not re-release the update until November 14.
“If you try to push [updates] wholesale to everyone, invariably, you’re going to find some problems and you’re going to look bad,” said Jack Gold, principal and founder of J. Gold Associates, a mobile analyst firm in Northborough, Mass.
The entire Windows-as-a-service model, which includes two major feature updates a year and limits the amount of control IT has over who gets what updates and when, has elicited strong opinions of Windows 10 from IT experts since its inception.
“Microsoft decided they know better than everybody,” Warren said. “People are advocating disabling Windows updates and doing everything manually once there’s been a few patch cycles.”
Microsoft relies on customers, particularly Windows Insiders, to serve as testers for updates to ensure that there aren’t bugs or other problems. This puts pressure on IT pros — who have their own systems to worry about — to identify and report any issues. Microsoft’s decision to release updates at such an accelerated pace compared to past versions of Windows means that even as it tries to fix issues, the company continues to add more code and features to the OS.
“They try to fix things in the monthly updates that they’ve screwed up from the biannual updates,” Warren said. “It’s just not a good way to write software, but they’re determined to do it.”
These issues are forcing some IT pros to delay updates. Warren purposely keeps some servers four or five updates behind to prevent his clients from running into issues that cause downtime.
“You need to have more options that allow people to delay if they want to,” Gold said. “Not forever, but maybe one major update back. Something that lets the rest of the world work with the update for a while and see where it goes.”
Security and Windows 10 updates
Regardless of IT pros’ opinions of Windows 10 updates, security is a key concern. The fact that a serious issue, such as the one that deleted user files, got past Microsoft set off alarm bells for Warren.
“[Without] proper testing there’s going to be issues that you won’t know about until the bad guys find them,” he said. “Then you have even more problems.”
The service model approach can have security benefits, however, because it prevents users from working with unpatched versions of the OS, which can be a gateway for attack.
“It’s like putting a seatbelt in a car,” Bagchus said. “Because computers are what they are, the magnitude of the risk of having unprotected computers on the internet cannot be overstated.”
Edge gets an overhaul
One area where opinions of Windows 10 seem in sync is that Microsoft Edge needs changes. In December, Microsoft announced it would discard EdgeHTML as Edge’s code in favor of the open source Chromium. It’s not a major surprise considering Edge — the default browser in Windows 10 — was a distant fourth in terms of usage behind Google Chrome, Mozilla Firefox and Internet Explorer as of November, according to NetMarketShare.
“It’s about time they grabbed somebody’s code that’s standards-based,” Warren said. “Make their own browser, but make it standards compliant.”
One of the problems with Edge today is that it only works with Windows 10 and does not support certain features, such as ActiveX controls and X-UA-Compatible headers. In addition, certain sites don’t load right on Edge, Gold said. Chromium can help eliminate these problems and open the browser up to work on other operating systems, including Apple macOS.
“The Edge compatibility issues have stood in the way of Windows 10 rollouts,” Bagchus said. “This is a step in the right direction for more than just their browser.”
17 Dec 2018
Not that much, actually. I looked around here and realized I now only have two machines utilizing a 12-core/24-thread monster server. Because my FreeNAS is UNIX-based like my Linux webservers, I do not need to run a Linux VM at all. I have my firewall set up to only allow SSH connections from the primary IP addresses of each of the three servers that are allowed to send their backups here. Since I do not want to simply reconfigure the two towers (mine and the girls' tower) I am just going to nuke them and set them up standalone. I only have to create 4 accounts: one for each of us, plus my backup account in case something happens to me, so he can get to my important data and other things. I then point FreeNAS to Backblaze B2 and let that be my backup target..:) FreeNAS snapshots will allow me to control versioning and retention, so I will be able to control how much data I actually store in the cloud…and it will be pre-encrypted as well. I will be interested to see how much my power draw drops. I will go down to one single Dell R310, plus my Unifi networking gear and cable modem.
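The versioning/retention piece is FreeNAS periodic snapshot tasks under the hood; the manual equivalent looks roughly like this (names and dates are examples):

```shell
# Take a recursive daily snapshot of everything under the pool:
zfs snapshot -r Data@daily-2018-12-17
# Expire the oldest one once it falls outside the retention window:
zfs destroy -r Data@daily-2018-11-17
```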
I have been trying to figure out how I want to change the rack to save power and reduce complexity. Given I am switching away from ETC Maryland, I do not need the massive amount of computing power I have sitting upstairs. It just hit me..:) My FreeNAS machine has the capability to hold everything, not only from my local home use but also as the backup target for my webservers. I will need to upgrade the amount of storage..but that will be cheaper up front than replacing everything. My power draw savings will not be as good (still should be around 50%). It will take a few months to transition things over. What happens to the current R410 and R610? I will prepare to move them to the datacenter, where they will become my primary webserver (R610) and my in-datacenter backup target (R410..after an upgrade).
Only missed one. They don't tell you which one, but I don't care; I passed.
I'm looking to start with a hand unit. Once I get my own vehicle I intend to mount a full-sized mobile unit in it…my wife would skin me alive if I mounted a radio in our only car right now.
I FINALLY was able to wrangle some time and get into the Ham Radio Technician Element 2 material. There's a ton of studying to do, and I've already jumped in. I can't wait to get my license so I can finally help with the New Blue Ridge Baptist Association's Mobile Communications unit. I'll keep folks posted. I'm looking forward to jumping into this and being able to operate a Ham radio station..:)
Zimbra has been a big hit. I'm currently trying to get the software lifecycles synchronized. Zimbra 7 has just been released…unfortunately Zimbra no longer supports Debian. CentOS 6 is about to be released and I'm not a fan of Ubuntu. CentOS 5 expires in 2014, which is about the same time as Zimbra 7. It looks like I'll stick with CentOS 5 until the EOL of Zimbra 7, then for Zimbra 8 change both the Zimbra version and the host operating system.
Servers: One of the donated rackmounts is now running Astaro again. Untangle let me down when it counted, and I find the conduct of their founder and COO distasteful. I had a bad e-mail get past the Untangle system and infect one of my users' computers. I've since switched to Astaro and frankly I couldn't be happier. Not only has the spam detection gone up to near 99% or higher, but false positives are nearly zero. So far the Astaro is rejecting 90% of all spam mail before it even gets to the anti-spam and A/V engines. This has led to a marked decrease in resource usage by the Zimbra server. I honestly had no idea how much was getting by the Untangle until I installed Astaro.
I also had all the UPS units in the server room fail. Luckily I was able to get a new single, large UPS that's ultimately capable of running everything in the server room for at least 10 minutes. Once I get the control software installed, the main server will be able to send graceful shutdown signals to the mail server and firewall server if there is a sustained power disruption. The file server will also shut down gracefully, meaning less chance of file system crashes or corruption..:)
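If the control software ends up being something like apcupsd (an assumption on my part; the exact package depends on the UPS brand), the graceful-shutdown piece is just a couple of settings:

```
# /etc/apcupsd/apcupsd.conf sketch (values here are examples, not my real setup)
UPSCABLE usb
UPSTYPE usb
BATTERYLEVEL 20      # start the shutdown when the battery falls to 20%
MINUTES 5            # ...or when under 5 minutes of runtime remain
TIMEOUT 0            # no fixed on-battery timeout; use the two limits above
```

The master instance can then notify the other servers (the mail and firewall boxes) over the network so everyone shuts down in order before the battery runs dry.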
There’s a couple of large projects coming but i’m not going to talk about them until everything is in place..:)
Got the new server online months ago…sorry for the lack of news. I wound up sticking with Debian. Everything went smoothly and now there are several domains running off of this box, including multiple streaming servers. Now a bigger challenge looms: moving this station to a new location AND hooking everything up at its new location. Will keep folks posted as I can.
First of all, he’s more precise with his numbers: 340,282,366,920,938,463,463,374,607,431,768,211,456
And he shows us how to say it:
Considering my earlier post about Facebook this isn’t unexpected.
If you aren’t already paranoid enough to remove your address and cell phone number from Facebook, today might be the day. Facebook has decided to give its third-party app developers API access to users’ address and phone numbers as they collectively get more involved in the mobile space, but privacy experts are already warning that such a move could put Facebook users at risk.
In its Developer Blog post, Facebook noted that developers will only be able to access an individual user’s address and phone number—not the info of his or her friends. Additionally, those who want to be able to use that data will have to be individually approved by the users themselves, and those developers must take special care to adhere to Facebook’s Platform Policies, which forbid them from misleading or spamming users.
Despite Facebook’s reassurance that users will have the final say in who gets the info and who doesn’t, it didn’t take long for observers to point out that it will be easy for shady developers to get in on the action. Security research firm Sophos wrote on its blog that rogue Facebook app developers already manage to trick users into giving them access to personal data, and this move will only make things more dangerous.
“You can imagine, for instance, that bad guys could set up a rogue app that collects mobile phone numbers and then uses that information for the purposes of SMS spamming or sells on the data to cold-calling companies,” Sophos senior technology consultant Graham Cluley wrote. “The ability to access users’ home addresses will also open up more opportunities for identity theft, combined with the other data that can already be extracted from Facebook users’ profiles.”
Cluley has a point. Just because app developers agree to follow Facebook’s terms doesn’t mean that they actually do, and many aren’t caught until it’s too late. We learned that much just a few months ago when a number of top Facebook apps were found to be collecting and selling user data against Facebook’s rules. Facebook ended up suspending those developers for six months, but by that time, the deed was already done.
Imagine if your home address and phone number, or those of your friends and family, were included in that data—does it really matter if developers who use it inappropriately are suspended after the fact? All I know is that I got rid of my cell number on Facebook after an old high school friend used it as part of some creepy “business opportunity” ploy (see, you can’t even trust the people you trust). And after this latest developer policy change, I definitely won’t be adding it back.
I'll be testing this first on my network and then pushing it to other networks (clients) that will follow my recommendation here. Firefox is old and isn't catching up. IE is even more current than Firefox. Google Chrome is the new star right now..:)
I just did my annual update to the Linux Counter Project, which is located at counter.li.org. Once I finished my updates I was quite shocked at what I found. Out of all the machines I manage in one form or another (at least server-wise), more than three-quarters are Linux boxes. Some of my clients have two Linux servers. Desktops are overwhelmingly Windows, however. Out of 13 servers, 10 of them run Linux. That's quite amazing when you think of it. I did not have any agenda when doing this…I simply chose what I felt was the best tool for the job. Of those 10 Linux boxes there are 4 dedicated firewalls, 1 web hosting server, three file servers, and one dedicated mailserver. The distributions represented are Astaro (1), Untangle (3), Debian (2), CentOS (1) (running the Zimbra Groupware Suite), SME (1), and Zentyal (2), of former eBox fame. That's an amazing variety that I was quite surprised to see represented. Going about my daily business it's easy to not really notice your layouts until you do an independent audit like this and have it stare back at you..:)
If you find you can't surf on your Comcast connection, they are having intermittent issues with their DNS servers. DNS is the service that translates www.hotmail.com to an actual IP address; when those servers have issues you can't surf. Comcast DNS is having issues again…the latest of multiple outages they have suffered over the past couple of weeks. If you are not a business customer I would suggest using OpenDNS.
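For the Linux folks, switching is a two-line change (these are OpenDNS's well-known public resolver addresses):

```
# /etc/resolv.conf: use OpenDNS instead of Comcast's resolvers
nameserver 208.67.222.222
nameserver 208.67.220.220
```

Windows users can put the same two addresses into the DNS fields of their network adapter settings.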
Let’s get the Comcast hating blinders off.
In a message written on Tue, Nov 30, 2010 at 08:12:23PM -0600, Richard A Steenbergen wrote:
> The part that I find most interesting about this current debacle is how
> Comcast has managed to convince people that this is a peering dispute,
> when in reality Comcast and Level3 have never been peers of any kind.
> Comcast is a FULL TRANSIT CUSTOMER of Level3, not even a paid peer. This
> is no different than a Comcast customer refusing to pay their cable
> modem bill because Comcast "sent them too much traffic" (i.e. the
> traffic that they requested), and then demanding that Comcast pay them
> instead. Comcast is essentially abusing it's (in many cases captive)
> customers to extort other networks into paying them if they want
> uncongested access. This is the kind of action that virtually BEGS for
> government involvement, which will probably end badly for all networks.
Actually it appears to be Level 3 who fired the first PR salvo, running to the FCC, if the date stamps on the statements are right. So it's really Level 3 framing as a net neutrality peering issue the fact that Comcast balked at paying them more. Netflix is today apparently delivered via Akamai, who has nodes deep inside Comcast. Maybe Akamai pays Comcast; I actually don't think that is the case from an IP transit point of view, but I think they do pay for space and power in Comcast data centers near end users. But anyway, this Netflix data is close to the user, and going over a settlement-free or customer connection. Level 3 appears to have sucked Netflix away, and wants to double dip, charging Netflix for the transit and Comcast for the transit. Worse, they get to triple dip, since they are Comcast's main fiber provider. Comcast will have to buy more fiber to haul the bits from the Equinix handoffs to the local markets where Akamai used to dump it off. Worse still, Level 3 told them mid-November that the traffic would be there in December. Perhaps 45 days to provision backbone and peering to handle this, during the holiday silly season. Perhaps Level 3 wanted to quadruple dip with the expedite fees. Yet with all of this Level 3 runs to the FCC screaming net neutrality. Wow. That takes balls. Comcast did itself no favors responding with "it's a ratio issue" rather than laying out the situation. What I wonder is why Netflix and Comcast are letting middle men like Level 3 and Akamai jerk both of them around. These two folks need to get together and deal with each other, cutting out the middle man....
So what's the issue? Level 3 told the world that Comcast had hit it up for more money in order to deliver traffic from Level 3's customers (such as Netflix) to Comcast's 17 million broadband subscribers. Level 3 said Comcast's demand for more dough violated the principles of the Open Internet, which is shorthand for net neutrality. On the other side, Comcast said Level 3 was trying to sell itself as a CDN while not having to pay fees to Comcast as other CDNs do. In short, Level 3 was calling itself a CDN to its customers and a backbone provider to Comcast. This (plus the fact that Level 3 owns one of the largest Internet backbone networks) enabled it to undercut its competitors in the CDN business because it didn't have to pay the fees that Akamai or Limelight did to get content onto Comcast's network.
For example, Level 3 even told people back in 2007 that it could deliver CDN services for the same price as Internet access, a feat made possible because it owned its own networks. So when Comcast pointed out that the traffic Level 3 was sending to its network would more than double, reaching a 5:1 ratio compared to the Comcast traffic sent over Level 3's network, it was justifying its decision to act, something covered in Comcast's peering agreement. (For detailed analysis of Comcast's peering agreement check out this post from Vijay Gill.)
'Nough said. I think Level 3 got caught trying to double dip here and is now crying foul. Comcast's peering policy is clear in this instance, as noted here. Level 3 is just playing on the "hey, this company is huge so they are automatically evil" mindset so many Americans have grown up with (which is a fantasy, BTW). I'm not saying Comcast is an angel, but I don't think they are the bad guys here.
Karl has this one nailed. Just because Comcast is the bigger of the two doesn't make it automatically wrong. L3 is trying to push the colocation costs of CDN traffic onto Comcast, and L3 got called on it. Here's a post on NANOG I found that very well illustrates what's really going on:
I’d never really paid attention to how Netflix delivers its content.
It’s obviously a lot of bandwidth, and likely part of the issue
here so I thought I would investigate.
Apparently Akamai has been the primary Netflix streaming source
since March. LimeLight Networks has been a secondary provider, and
it would appear those two make up the vast majority of Netflix’s
actual streaming traffic. I can’t tell if Netflix does any streaming
out of their own ASN, but if they do it appears to be minor.
Here’s a reference from the business side of things:
This is also part of the reason I went back to the very first message in
this thread to reply:
In a message written on Mon, Nov 29, 2010 at 05:28:18PM -0500, Patrick W. Gilmore wrote:
> > <http://www.marketwatch.com/story/level-3….
> > I understand that politics is off-topic, but this policy affects operational aspects of the ‘Net.
Patrick works for Akamai, it seems likely he might know more about
what is going on. Likely he can’t discuss the details, but wanted
to seed a discussion. I’d say that worked well.
I happen to be a Comcast cable modem customer. Googling for people
who had issues getting to Netflix streaming turned up plenty of
forum posts with traceroutes to Netflix servers on Akamai and
Limelight. I did traceroutes to about 20 of them from my cable
modem, and it’s clear Comcast and Akamai and Comcast and Limelight
are interconnected quite well. Akamai does not sell IP Transit,
and I’m thinking it is extremely unlikely that Comcast is buying
transit from Limelight. I will thus conclude that these are either
peering relationships, or that they have cut some sort of special
“CDN Interconnect” deal with Comcast.
But what about Level 3? One of my friends I was chatting with on AIM
said they thought Comcast was a Level 3 customer, at least at one time.
Google to the rescue again.
Level 3 provides fiber to Comcast (20 year deal in 2004):
Level 3 provides voice services/support to Comcast:
Perhaps the most interesting though is looking up an IP on Comcast’s
local network here in my city in L3’s looking glass:
Slightly reformatting for your viewing pleasure, along with my comments:
Level3_Customer # Level 3 thinks they are a customer
Suppress_to_AS174 # Cogent
Suppress_to_AS1239 # Sprint
Suppress_to_AS1280 # ISC
Suppress_to_AS1299 # Telia
Suppress_to_AS1668 # AOL
Suppress_to_AS2828 # XO
Suppress_to_AS2914 # NTT
Suppress_to_AS3257 # TiNet
Suppress_to_AS3320 # DTAG
Suppress_to_AS3549 # GBLX
Suppress_to_AS3561 # Savvis
Suppress_to_AS3786 # LG DACOM
Suppress_to_AS4637 # Reach
Suppress_to_AS5511 # OpenTransit
Suppress_to_AS6453 # Tata
Suppress_to_AS6461 # AboveNet
Suppress_to_AS6762 # Seabone
Suppress_to_AS7018 # AT&T
Suppress_to_AS7132 # AT&T (ex SBC)
So it would appear Comcast is a transit customer of Level 3 (along with
buying a lot of other services from them). I’m going to speculate that
the list of suppressed ASNs are peers of both Level 3 and Comcast, and
Comcast is doing that so those peers can't send some traffic through
Level 3 in an attempt to game the ratios on their direct connections to
Comcast.
Now a more interesting picture emerges. Let me emphasize that this is
AN EDUCATED GUESS on my part, and I can’t prove any of it.
Level 3 starts talking to Netflix, and offers them a sweetheart deal to
move traffic from Akamai to Level 3. Part of the reason they are
willing to go so low on the price to Netflix is they will get to double
dip by charging Netflix for the bits and charging Comcast for the bits,
since Comcast is a customer! But wait, they also get to triple dip,
they provide the long haul fiber to Comcast, so when Comcast needs more
capacity to get to the peering points to move the traffic that money
also goes back to Level 3! Patrick, from Akamai, is unhappy at losing
all the business, and/or bemused at the ruckus this will cause and
chooses to kick the hornets nest on NANOG.
One last thing, before we do some careful word parsing. CDN’s like
Akamai and LimeLight want to be close to the end user, and the
networks with end users want them to be close to the end user. It
doesn’t make sense to haul the bits across the country for any party
involved. Akamai does this by locating clusters inside providers
networks, LimeLight does it by provisioning bandwidth from their
data centers directly to distribution points on eyeball networks.
So let’s go back and look at the public statements now:
Level 3 said:
“On November 19, 2010, Comcast informed Level 3 that, for the first
time, it will demand a recurring fee from Level 3 to transmit Internet
online movies and other content to Comcast’s customers who request such
content.”
Comcast said:
“Comcast has long established and mutually acceptable commercial
arrangements with Level 3’s Content Delivery Network (CDN) competitors
in delivering the same types of traffic to our customers. Comcast
offered Level 3 the same terms it offers to Level 3’s CDN competitors
for the same traffic.”
You can make both of these statements make sense if the real situation
is that Comcast told Level 3 they needed to act like a CDN if they were
going to host Netflix. Rather than having Comcast pay as a customer,
they needed to show up in various Comcast distribution centers around
the country where they could drop traffic off “locally”. To Comcast
this is the same deal other CDN’s get, it matches their statement. To
Level 3, this means paying a fee for bandwidth to these locations, and
being that they are Comcast locations it may even mean colocation fees
or other charges directly to Comcast. Comcast said if you’re not going
to show up and do things like the CDN players then we’re going to hold
you to a reasonable peering policy, like we would anyone else making us
run the bits the old way.
The most interesting thing to me about all of this is these companies
clearly had a close relationship, fiber, voice, and IP transit all on
long term deals. If my speculation is right I’m a bit surprised Level 3
would choose to piss off such a long term large customer by bringing
Netflix to the party like this, which is one of the reasons I doubt my
speculation a bit.
But, to bring things full circle, neither of the public statements tell
the whole story here. They each tell one small nugget, the nugget that
side wants the press to run with so they can score political points.
Business is messy, and as I’ve said throughout this thread this isn’t
about peering policies or ratios, there are deeper business interests
on both sides. This article:
Suggests Level 3 is adding 2.9 Terabits of capacity just for Netflix.
I’m sure a lot of that is going to Comcast users, since they are the
largest residential broadband ISP.
Messy. Very messy.
— Leo Bicknell – firstname.lastname@example.org – CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
This article says it better than I can. The GPL is actually now causing more issues than it solves. I could actually use the GPL to RESTRICT who can use my software and how…whereas the BSD license is a truly Free Open Source license.
Oracle owns InnoDB and now MySQL. It's time to move from MySQL to Postgres. Not one of the MySQL forks…but Postgres. Oracle is beginning the squeeze of Sun's properties.
OpenOffice is for all intents and purposes dead. LibreOffice is where it's at, which is what I am going to be trying out for possible recommendations.
I have been doing IT work as a volunteer for a local radio station, WTHU, for a few years now. Slowly but surely we have been moving along the technology track in the right direction. We have a stout server in Washington state that handles our streaming. The costs for this are extremely reasonable, but times have gotten tight and we have to find ways to cut costs even more. That led to a local vendor, Swift Systems, kindly donating a 2U rackmount server, colocation, power, and an unmetered 10 megabit port. I went into Swift Systems today to install Debian onto said server. This turned out to be quite the adventure. Some of it was totally me…I was not familiar with Debian 5. I've used several variants including the near-ubiquitous Ubuntu (which I would NEVER put onto a server) but I wanted the real Debian. The hardware issue is the CD-ROM drive. I don't know why…but it's sssssssllllloooowwwwwww. Painfully slow. However the rest of the box is very very fast. Debian in its default install mode will only let you configure one interface at install time. If you give it an address that does not have internet connectivity when it tries to build its mirror list, it'll time out (after about 5-10 minutes) and use ONLY the CD-ROM. I found this out the hard way. I was not going to do that. I tried a reinstall but again was met with the sloooow CD-ROM..:) I tried to set up one interface via DHCP (so it would get a local IP) and then set the other interface to static, to no avail in the installer. I set up the ILO with another static IP in the assigned range and will have them rack the box. I should be able to get into the machine using the ILO and then, using the console redirect, install Debian to the static range. I should then be able to build the repos properly and have a working Debian install.
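Once I'm in over the ILO, the fix for the one-interface limitation is simple: Debian keeps its interface config in /etc/network/interfaces, and you can define both NICs there by hand (the addresses below are placeholders standing in for the assigned range):

```
# /etc/network/interfaces sketch: one DHCP NIC, one static NIC
auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static
    address 203.0.113.20
    netmask 255.255.255.248
    gateway 203.0.113.17
```

After editing, `ifup eth1` brings the static interface online without a reboot.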
Why not CentOS? CentOS 5 is less than 3 years from expiring, and I did not want to have to do an OS upgrade anytime soon. With CentOS you have to reinstall for a major upgrade. With Debian you just run apt-get and install the new version. We will see if I can get Debian to install via the ILO. If not, I'll go with CentOS and deal with the OS upgrade later..:)
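The Debian upgrade path really is that simple: the whole trick is pointing sources.list at the next release name before running apt-get. A sketch (lenny and squeeze are the Debian 5/6 codenames; the mirror URL is just an example):

```shell
# Rewrite a sources.list entry from Debian 5 (lenny) to Debian 6 (squeeze)
echo "deb http://ftp.debian.org/debian lenny main" | sed 's/lenny/squeeze/'
# prints: deb http://ftp.debian.org/debian squeeze main
```

After rewriting the real /etc/apt/sources.list, `apt-get update && apt-get dist-upgrade` does the rest.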
Well, I got the servers in and really didn't want to wait for the 12U rack…mainly because it's not in the budget right now. I took one of the servers and installed Untangle on it. I now have 4 network cards in the thing. One is red (internet), one is blue (free public wifi) and one is green (church's internal network). The 4th one is for future use (and I already have a plan for it). What are the specs of this box? It is an IBM x335 with dual Xeon 2.8GHz CPUs with HT, 4 gigs of RAM, and two 36 gig 10k U320 SCSI HDDs in hardware RAID 1. The thing just smokes..:) I'm waiting for a couple of major events to really test the box:
1. the Don Piper conference we are having
2. Upward basketball.
Upward is going to be the bigger test, as we'll have hundreds of folks inside the new wing from 9am to 6pm Saturday and Sunday every week for about 3 months. I'm hoping to get at least 20 folks on the wifi so I can see how this box handles it.
I had a Dell PowerEdge 1800 running Astaro as the firewall until this donation came in. Our e-mail is run by a company called PowWeb, and I have been hearing about unreliable service, crashing interfaces, and other issues for months now. Since the Dell is 64-bit compatible I decided to press it into use as the new church e-mail server. The test for the firewall is: can it handle everything I'm going to throw at it? E-mail, content filtering, anti-virus scanning, packet inspection, remote access…etc etc etc. My research tells me it will. The most fascinating thing about Untangle is it makes heavy use of Java. Java is at the core of the entire system and ALL traffic passes through this Java core. So far it's worked without a hitch. I've set up some simple traffic priority rules that say the church's traffic has the highest priority and the free wifi has the lowest. I'll be watching the server closely to see how it does…not that I'm anticipating problems…but this is a new product that has impressed me, and I want to see it work under load as I look at the innards to see how it works..:) Cost for all of this? $105, and that was just to cover shipping..:) All of the software is free.
I just need to get the final list of current mailboxes and get the DNS switched over. Staff meeting this Monday to see if they'll give the green light. I have found several extensions (called zimlets) that really extend the feature set of the Zimbra platform. I now have built into the platform:
1. Automatic detection of UPS and FedEx tracking numbers. The system will automatically highlight tracking numbers and auto-create hyperlinks. Clicking the link takes you directly to your tracking information.
2. Daily summary of tasks and appointments. When the user logs in, the zimlet checks their calendar for that day and sends them appropriate reminders.
3. Post Office tracking. Along the same lines as the UPS/FedEx trackers…this one will also grab postal tracking numbers from several other countries as well.
4. Social network integration. Twitter, Facebook and a couple of others can be integrated into your Zimbra interface.
These are in addition to the base feature set available with the free version. All of these zimlets are free as well. The best thing…no more Outlook. FBC users can get to this anywhere they wish via an HTTPS-secured channel..:)
You can read about the donation here. I have three IBM x335's on the way with dual P4 Xeon 2.8GHz CPUs, dual 36 gig 10k RPM SCSI drives with hardware RAID 1, 4 gigs of RAM, all the cables needed including ILO, and rails. All for the cost of shipping. Why am I posting about it here? I run the network at my church. This will be the first time I can start something like this from the ground up and document what I do, how I do it, and what hardware and software I do it with. I will also be able to show just how much free software can do while still integrating with an established Active Directory layout. It's a way for other potential NPO clients to see what some creative thinking can accomplish for little or no cost..:) Stay tuned; I've created a whole new category for this..:)
After some internal testing and research I can honestly say that virtualization may not be the best solution except for larger deployments. For the same money (or less) as either upgrading one server to host multiple VMs or purchasing a new server capable of doing so, I can build two machines around Intel Atom D510s that together would draw less at MAX load than the new or upgraded machine would draw at half load. When I do my own server refresh (and for clients as well) I'll be looking at Atom solutions instead of virtualization. If the client in question has a more CPU-intensive workload than the Atom can handle, then virtualization might be an option. However, from what I am seeing in various forums, Atom-based servers can handle quite a bit more than most folks give them credit for.
This is the primary reason Unix folks remove the computer, make an image for forensics, and then rebuild from a known good source. Windows folks have yet to figure this one out. I take the same philosophy towards malware that Unix admins do: nuke the box, because you can't trust it's clean once it's been compromised.
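The image-then-nuke workflow is nothing exotic; in real use you'd point dd at the whole disk (e.g. /dev/sda) and stash the image plus its hash somewhere safe before wiping. A sketch, using a small file as a stand-in for the disk:

```shell
# Forensic-image sketch: copy the "disk", then hash it for evidence integrity
printf 'compromised-disk-contents' > disk.raw   # stand-in for /dev/sda
dd if=disk.raw of=disk.img 2>/dev/null          # bit-for-bit copy
cmp -s disk.raw disk.img && echo "image verified"
sha256sum disk.img                               # record this hash alongside the image
```

Once the image and hash are safely off the machine, you can wipe and rebuild from known good media with a clear conscience.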
In one incident, a sports bar in Miami was targeted by attackers who used a custom-designed rootkit that installed itself in the machine's kernel, making detection particularly difficult. The rootkit had a simple, streamlined design and was found on a server that handled credit card transactions at the bar. It searched for credit card track data, gathered whatever it found and dumped the data to a hidden folder on the machine. The attacker behind the rootkit took the extra step of changing a character in the track data that DLP software looks for in order to identify credit card data as it's leaving a network, making the exfiltration invisible to the security system.
I totally agree with SFGATE on this one..and the reasons given for opposition are totally accurate and standing within Constitutional principles.
Now this would be interesting. If this is actually true, then instead of having to get a smartphone with the cell carriers' price-gouging built in, I can get a netbook running Android…hrmmmm…I like this idea. If it works out I might just leave the notebook at home when I go out.
The author makes some great points here. Take a gander.
Right now cloud computing isn't a security enhancement…it's a security nightmare. Most cloud apps actually require you to download and install an executable file that then connects to the cloud. And the operating system requirements? Windows…most of the time. I would like to see the cloud vendors support a truly web-based experience, like Google's. Then you wouldn't need Windows…Linux would work. Your costs go through the floor: no high costs for server operating system software, no high costs for desktop operating system software. There are a couple of gotchas. One is that most applications don't yet run on a Linux desktop or a true cloud. Secondly, disallowing access from outside your company. This isn't as easy to solve as it seems since it's a web-based thing…considering the low costs, though, just get your company a static IP (or several) and tell the cloud vendor only those IPs are allowed to access that app. Then you have the best of both worlds, in a very brief nutshell. If you are interested in more details let me know. I might fire up the podcast machine..:)
There are some serious errors in this…I'll address them inline.
Windows Server vs. Linux
June 8, 2010 —
This debate arouses vehement opinions, but according to one IT consultant who spends a lot of time with both Windows and Linux, it’s a matter of arguing which server OS is the most appropriate in the context of the job that needs to be done, based on factors such as cost, performance, security and application usage.
“With Linux, the operating system is effectively free,” says Phil Cox, principal consultant with SystemExperts. “With Microsoft, there are licensing fees for any version, so cost is a factor.” And relative to any physical hardware platform, Linux performance appears to be about 25% faster, Cox says.
That's at a minimum. It's often much higher. Windows Server Core is an attempt to regain some of that base speed by jettisoning the GUI.
Combine that with the flexibility you have to make kernel modifications, something you can’t do with proprietary Windows, and there’s a lot to say about the benefits of open-source Linux. But that’s not the whole story, Cox points out, noting there are some strong arguments to be made on behalf of Windows, particularly for the enterprise.
For instance, because you can make kernel modifications to Linux, the downside of that is “you need a higher level of expertise to keep a production environment going,” Cox says, noting a lot of people build their own packages and since there are variations of Linux, such as SuSE or Debian, special expertise may be needed.
Windows offers appeal in that “it’s a stable platform, though not as flexible,” Cox says. When it comes to application integration, “Windows is easier,” he says.
Windows most assuredly is NOT easier. By the time you get through managing patches, tweaking the default configuration, and piling on the layers of security you need to have a prayer of a chance of NOT getting compromised…Linux is MUCH easier. I can turn up a Linux server from ground zero to the base install in under an hour WITHOUT USING AN IMAGE. Updates? One run and one reboot. Windows? It'll be multiples of each…it goes on and on and on.
Windows access control “blows Linux out of the water,” he claims. “In a Windows box, you can set access-control mechanisms without a software add-on.”
He apparently hasn't heard of chmod and chown. You can do everything you want right from the CLI. I tend to use a package called Webmin, which is installed from the command line and run from a web browser…I don't have to pay the Windows GUI performance tax.
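To make the point concrete, here's the kind of access control he's talking about, straight from the shell with no add-ons (the file name is just an example):

```shell
# Unix access control from the CLI: owner gets read/write, group gets read, world gets nothing
touch payroll.txt
chown "$(id -un)":"$(id -gn)" payroll.txt   # set owner and group (here, the current user)
chmod 640 payroll.txt                       # rw for owner, r for group, none for others
stat -c %a payroll.txt                      # prints 640
```

Finer-grained, per-user grants (closer to Windows ACLs) are available too via POSIX ACLs and `setfacl`, so the "needs a software add-on" claim doesn't really hold.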
Patching is inevitable with either Windows or Linux, and in this arena, Cox says that it’s easier to patch Windows. Microsoft is the only source to issue Windows patches. With Linux, you have to decide whether to go to an open-source entity for patches, for instance the one for OpenSSH, or wait until a commercial Linux provider, such as Red Hat, provides a patch.
OR you can use a community variant called CentOS (to reference Red Hat) which is non-commercial…OR you can use the granddaddy of Linux distros, Debian, which is the basis of many many other distributions. You don't have to go to OpenSSH directly because the distros are hooked right into the package vendors. Here's one point the author missed: speed of patches. Microsoft WON'T patch outside of its monthly cycle until there's an active exploit. Most Linux distros patch within 24 hours of release…24 HOURS…not DAYS or MONTHS…HOURS. Let's see Microsoft do that…and do it reliably without hosing users' systems that have gotten infested due to their continued bad design choices.
Microsoft presents a monolithic single point of contact for business customers, whereas “In Linux, you need to know where to go for what,” which makes it more complicated, Cox says. “There’s no such thing as a TechNet for Linux,” he says. Linux users need to be enthusiastic participants in the sometimes clannish open-source community to get the optimum results.
Oh, and Microsofties aren’t clannish? LOL! Let me tell you something..if you don’t drink the Microsoft Kool-Aid totally, you won’t last in the MS forums and MS evangelist sites..trust me, I know about this.
These kind of arguments may indicate why Windows Server continues to have huge appeal in the enterprise setting, though some vertical industries, such as financial firms, have become big-time Linux users.
The only reason Windows keeps hanging around like a fungus is that third-party app vendors have not yet started coding for Linux in large numbers…that’s coming. Once folks can see the advantages of Linux, MS will have to tighten up their code or die.
Linux and open-source applications are popular in the Internet-facing extranet of the enterprise, Cox notes. And Linux has become a kind of industrial technology for vendors which use it in a wide range of products and services — for instance Amazon’s EC2 computing environment data centers rely on Xen-based Linux servers.
Know why? Security is one, reliability is another, and patching is stupid easy (run updates on the live system; if there are no kernel updates, no reboot is needed..at all). Windows hangs around right now because third-party vendors aren’t coding for Linux…yet. MS does have its place right now, and I will recommend Windows on the back end only when it’s truly necessary. The comments on this article do a far better job of eviscerating the author than I do..:)
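On the "no reboot needed" point: on Debian/Ubuntu boxes there's even a flag file that only appears when an update actually replaced the kernel or another core package, so you never guess. A quick check (the echo strings are just my own labels):

```shell
# After running updates, see whether this Debian/Ubuntu box actually
# needs a reboot. /var/run/reboot-required only exists when a kernel
# or other core package was replaced by the last update run.
if [ -f /var/run/reboot-required ]; then
    echo "reboot needed"
else
    echo "no reboot needed"
fi
```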
The use cases really are getting smaller and smaller. You really need to have 5 or more machines active to justify it. Running e-mail out of house can be done, but it’s not easy, as Exchange really wants to be the mail hub (which makes sense, since it IS a full-featured mail server). The issues are the high cost as well as the high system requirements: you really need a minimum of 8 gigs of RAM, and you really need true hardware RAID 1 or higher. I have found dual cores to suffice if they are fast enough, but quads are so cheap there’s no reason to skimp. Unfortunately this is another example of MS products getting very, very bloated.
I think for my small clients Server 2008 Standard, or even Server 2008 Foundation for simple AD and file sharing, is going to be the best bet. If you aren’t tied to a Microsoft backend (say, folks who run programs that require a Windows server to share databases), then I have a couple of alternatives:
Both of these are Linux-based groupware suites..and you can’t beat the price…free. If you aren’t tied to a Microsoft backend and are a small shop, there’s no longer any need to spend 2-3k on an MS-based server…you can get a $500 server and use one of these packages. The only additional cost is an installation fee from ECC…that’s it.
If you are tied to a Microsoft backend, then SBS may be a good fit for you. I have been testing Google Apps for business for my own business and personal domains…and it’s worked out well. With a few add-ons you can use Mozilla Thunderbird to do calendaring and check e-mail. With a few setting changes you can also share calendars between users. It’s not quite as flexible as Outlook/Exchange…yet, but Google is constantly putting in new features, which means you don’t have to be shackled to the Exchange/Outlook pair anymore.
Now that there are truly some alternatives, it only means good things for my clients, as I can now give them the most effective options for their businesses. SBS 2003 was a great package at a great price…SBS 2008 has gotten really, really expensive. Frankly, those consultants who have hooked themselves exclusively to the MS train are doing their clients a grave disservice, in my opinion.
Oucies. That’s horrendously expensive. I’m assuming he’s getting great service..:) There are less expensive options out there…like mine..:)
He said that with 70 per cent of the US market for Internet search, Google is the gateway to the Internet. How it tweaks its proprietary search algorithms can ensure a business's success or doom it to failure.
Give me a break. My business works just fine without Google. I do nothing to “promote” myself on Google, nor do I worry about my search rankings. Google is not a monopoly. Users have a choice. If they don’t like Google, they can go to another search engine. I use Google because it works..plain and simple. That’s why most of its users continue to use it as well. Microsoft is a true monopoly. You can use Linux or Apple, but trust me, your functional level and ease of use go downhill quickly.
I have converted two of my domains to Gmail via Google Apps. I am now using Thunderbird via secured IMAP to check my e-mail for those accounts. I also now have access to my business e-mail from my phone without having to get a higher-priced data plan. I am using two add-ons (detailed at the linked post, which is copied below) for Thunderbird to accomplish this. So far it works great. I am going to try it for a bit and see how well it works out.
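For reference, these are the standard Gmail endpoints I pointed Thunderbird at (the example username is, obviously, a placeholder):

```
Incoming (IMAP):  imap.gmail.com   port 993   SSL/TLS
Outgoing (SMTP):  smtp.gmail.com   port 587   STARTTLS
Username:         the full address, e.g. you@yourdomain.com
```

With Google Apps domains you use your full domain address as the username, not just the local part.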
@Update: 2nd November, 2007: Updated guide for Lightning v0.7 and Provider 0.3.1 releases — Jonny
For a long time I have been looking for a rock solid calendaring system. I’ve gotten too used to working for companies who have Microsoft Exchange (or, God forbid, Scalix) installed which allow me to edit and update a calendar from multiple locations and even sync it with my Mobile Phone. When I first heard of Google Calendar I hoped that I would be able to enjoy such benefits again, but I am not a great fan of web-apps, and prefer a nice, solid desktop client to do my email / organisation from.
Cue Lightning, the calendaring extension for Thunderbird which brings the desktop email app one step closer to becoming a viable alternative to Microsoft Outlook. Installation can be a little bit confusing and you must remember that this add-on is still in the 0.x stages, so may be a tad unstable at times (but that’s ok, we love this kind of thing!)
Open up Thunderbird and on the Top Menu, go to:
Tools -> Add-ons
When the Add-ons window opens, click on the Install button on the bottom left and paste in the following URL to install the latest release of Lightning. (Windows only; Linux / Mac users will need to get this link by copying the XPI download path from the Mozilla Add-on repository, located here.)
Win32 Lightning Add-On XPI Download Link:
If you get a warning similar to “Lightning could not be installed because it is not compatible with Firefox” then you are trying to install the XPI directly into Firefox. Instead, you need to either “open” the link from inside the Thunderbird Add-Ons Install Window, or save the XPI to your desktop and then drag it into the Thunderbird Add-Ons Window.
Once you have installed the Lightning Extension, Thunderbird will ask you to restart. Upon restarting you will be greeted with a new Sidebar on the right displaying tasks and events and a tool bar underneath your folder list.
This is all well and good and provides us with an easy to use local calendar, but that’s not much use if you wanted to update it at work, or on the road / mobile device. This is where the Provider Add-on comes in to play.
Provider allows bidirectional syncing between the Lightning Calendaring Extension in Thunderbird and Google’s GCal Service. This is possible because Google, being the lovely chaps that they are, decided to opt for the iCalendar standard in GCal, well done chaps 🙂
Installation of Provider is pretty similar to that of Lightning. Again, go to the Add-ons Window (Tools -> Add-Ons) and Install the XPI available for download from Provider’s Page in the Mozilla Add-on repository.
Win32 Provider Add-On XPI Download Link:
Again, once installed, Thunderbird will have to be restarted.
Now, the last piece of the Pie is to tie our Google Calendar into our Lightning Calendar. First of all, you will need to log into your Google Calendar account. Once you are at the main page, click on “Settings” from the Top Right Menu:
Once on the settings page, you need to drill down into the “Calendars Settings” screen and then click on your Calendar from the list (I only had a single calendar.)
Now, finally, you need to copy the URL of your Private Address XML Feed into the clipboard.
You’re done in Google Calendar for now and we can head back to Thunderbird to finally wrap this tutorial up ;). Once you are back in Thunderbird, you need to create a new calendar in Lightning. You can do this by clicking on the following Menu item:
File -> New -> Calendar…
Upon clicking the New Calendar menu item, another window will appear. The first option is the location of your Calendar – select “On the Network” and click Next.
The next option allows you to specify the Format of the Calendar; select the “Google Calendar” radio button (if you don’t have a Google Calendar radio button, make sure your Provider Extension is installed correctly). In the location input box, paste in the Google Calendar Private Address XML Feed that we extracted above, and click Next.
The next window asks you to give your new Calendar a Name and a Colour, I will leave these entirely up to you 😉
Finally (yes, at last) you will have a “Google Calendar Login” window which will ask for your Google Account login. If you only have a single Google Calendar, Provider will have automagically extracted your username from the XML feed you just specified; however, just double check that it reads @GMAIL.COM. Then enter your usual GMail password.
Well done, you can now enjoy the many benefits of being able to view and update your Google Calendar directly from Thunderbird – nice work 😉
It’s very simple. Yahoo is abandoning their search. All of their search results are going to come from Bing. How bad is Bing? I can search Microsoft’s site with Google faster, with more accuracy, and more relevance than Bing can hope for.
If MagicJack doesn’t get sued out of existence, this is an excellent value..it means you can drop your plan back since you won’t be using as many minutes. If it only worked with CDMA phones, then I could really save some minutes on my Sprint account. I will not be buying one of those overpriced femtocells.
I am not normally a fan of anything gov’t, but this time the VA has developed a system for electronic health records that is well-rounded, stable, highly customizable, has tons of features onto which more can be added, and is recognized as a great package. It’s called VistA (vist-A..not Vista). I’ll be looking into this, as well as the alliance that has formed around it, for my medical practitioner clients. Full article follows:
Code Red: How software companies could screw up Obama’s health care reform.
The central contention of Barack Obama’s vision for health care reform is straightforward: that our health care system today is so wasteful and poorly organized that it is possible to lower costs, expand access, and raise quality all at the same time—and even have money left over at the end to help pay for other major programs, from bank bailouts to high-speed rail.
It might sound implausible, but the math adds up. America spends nearly twice as much per person as other developed countries for health outcomes that are no better. As White House budget director Peter Orszag has repeatedly pointed out, the cost of health care has become so gigantic that pushing down its growth rate by just 1.5 percentage points per year would free up more than $2 trillion over the next decade.
The White House also has a reasonably accurate fix on what drives these excessive costs: the American health care system is rife with overtreatment. Studies by Dartmouth’s Atlas of Health Care project show that as much as thirty cents of every dollar in health care spending goes to drugs and procedures whose efficacy is unproven, and the system contains few incentives for doctors to hew to treatments that have been proven to be effective. The system is also highly fragmented. Three-quarters of Medicare spending goes to patients with five or more chronic conditions who see an annual average of fourteen different physicians, most of whom seldom talk to each other. This fragmentation leads to uncoordinated care, and is one of the reasons why costly and often deadly medical errors occur so frequently.
Almost all experts agree that in order to begin to deal with these problems, the health care industry must step into the twenty-first century and become computerized. Astonishingly, twenty years after the digital revolution, only 1.5 percent of hospitals have integrated IT systems today—and half of those are government hospitals. Digitizing the nation’s medical system would not only improve patient safety through better-coordinated care, but would also allow health professionals to practice more scientifically driven medicine, as researchers acquire the ability to mine data from millions of computerized records about what actually works.
It would seem heartening, then, that the stimulus bill President Obama signed in February contains a whopping $20 billion to help hospitals buy and implement health IT systems. But the devil, as usual, is in the details. As anybody who’s lived through an IT upgrade at the office can attest, it’s difficult in the best of circumstances. If it’s done wrong, buggy and inadequate software can paralyze an institution.
Consider this tale of two hospitals that have made the digital transition. The first is Midland Memorial Hospital, a 371-bed, three-campus community hospital in west Texas. Just a few years ago, Midland Memorial, like the overwhelming majority of American hospitals, was totally dependent on paper records. Nurses struggled to decipher doctors’ scribbled orders and hunt down patients’ charts, which were shuttled from floor to floor in pneumatic tubes and occasionally disappeared into the ether. The professionals involved in patient care had difficulty keeping up with new clinical guidelines and coordinating treatment. In the normal confusion of day-to-day practice, medical errors were a constant danger.
This all changed in 2007 when Midland completed the installation of a health IT system. For the first time, all the different doctors involved in a patient’s care could work from the same chart, using electronic medical records, which drew data together in one place, ensuring that the information was not lost or garbled. The new system had dramatic effects. For instance, it prompted doctors to follow guidelines for preventing infection when dressing wounds or inserting IVs, which in turn caused infection rates to fall by 88 percent. The number of medical errors and deaths also dropped. David Whiles, director of information services for Midland, reports that the new health IT system was so well designed and easy to use that it took less than two hours for most users to get the hang of it. “Today it’s just part of the culture,” he says. “It would be impossible to remove it.”
Things did not go so smoothly at Children’s Hospital of Pittsburgh, which installed a computerized health system in 2002. Rather than a godsend, the new system turned out to be a disaster, largely because it made it harder for the doctors and nurses to do their jobs in emergency situations. The computer interface, for example, forced doctors to click a mouse ten times to make a simple order. Even when everything worked, a process that once took seconds now took minutes—an enormous difference in an emergency-room environment. The slowdown meant that two doctors were needed to attend to a child in extremis, one to deliver care and the other to work the computer. Nurses also spent less time with patients and more time staring at computer screens. In an emergency, they couldn’t just grab a medication from a nearby dispensary as before—now they had to follow the cumbersome protocols demanded by the computer system. According to a study conducted by the hospital and published in the journal Pediatrics, mortality rates for one vulnerable patient population—those brought by emergency transport from other facilities—more than doubled, from 2.8 percent before the installation to almost 6.6 percent afterward.
Why did similar attempts to bring health care into the twenty-first century lead to triumph at Midland but tragedy at Children’s? While many factors were no doubt at work, among the most crucial was a difference in the software installed by the two institutions. The system that Midland adopted is based on software originally written by doctors for doctors at the Veterans Health Administration, and it is what’s called “open source,” meaning the code can be read and modified by anyone and is freely available in the public domain rather than copyrighted by a corporation. For nearly thirty years, the VA software’s code has been continuously improved by a large and ever-growing community of collaborating, computer-minded health care professionals, at first within the VA and later at medical institutions around the world. Because the program is open source, many minds over the years have had the chance to spot bugs and make improvements. By the time Midland installed it, the core software had been road-tested at hundreds of different hospitals, clinics, and nursing homes by hundreds of thousands of health care professionals.
The software Children’s Hospital installed, by contrast, was the product of a private company called Cerner Corporation. It was designed by software engineers using locked, proprietary code that medical professionals were barred from seeing, let alone modifying. Unless they could persuade the vendor to do the work, they could no more adjust it than a Microsoft Office user can fine-tune Microsoft Word. While a few large institutions have managed to make meaningful use of proprietary programs, these systems have just as often led to gigantic cost overruns and sometimes life-threatening failures. Among the most notorious examples is Cedars-Sinai Medical Center, in Los Angeles, which in 2003 tore out a “state-of-the-art” $34 million proprietary system after doctors rebelled and refused to use it. And because proprietary systems aren’t necessarily able to work with similar systems designed by other companies, the software has also slowed what should be one of the great benefits of digitized medicine: the development of a truly integrated digital infrastructure allowing doctors to coordinate patient care across institutions and supply researchers with vast pools of data, which they could use to study outcomes and develop better protocols.
Unfortunately, the way things are headed, our nation’s health care system will look a lot more like Children’s and Cedars-Sinai than Midland. In the haste of Obama’s first 100 days, the administration and Congress crafted the stimulus bill in a way that disadvantages open-source vendors, who are upstarts in the commercial market. At the same time, it favors the larger, more established proprietary vendors, who lobbied to get the $20 billion in the bill. As a result, the government’s investment in health IT is unlikely to deliver the quality and cost benefits the Obama administration hopes for, and is quite likely to infuriate the medical community. Frustrated doctors will give their patients an earful about how the crashing taxpayer-financed software they are forced to use wastes money, causes two-hour waits for eight-minute appointments, and constrains treatment options.
Done right, digitized health care could help save the nation from insolvency while improving and extending millions of lives at the same time. Done wrong, it could reconfirm Americans’ deepest suspicions of government and set back the cause of health care reform for yet another generation.
Open-source software has no universally recognized definition. But in general, the term means that the code is not secret, can be utilized or modified by anyone, and is usually developed collaboratively by the software’s users, not unlike the way Wikipedia entries are written and continuously edited by readers. Once the province of geeky software aficionados, open-source software is quickly becoming mainstream. Windows has an increasingly popular open-source competitor in the Linux operating system. A free program called Apache now dominates the market for Internet servers. The trend is so powerful that IBM has abandoned its proprietary software business model entirely, and now gives its programs away for free while offering support, maintenance, and customization of open-source programs, increasingly including many with health care applications. Apple now shares enough of its code that we see an explosion of homemade “applets” for the iPhone—each of which makes the iPhone more useful to more people, increasing Apple’s base of potential customers.
If this is the future of computing as a whole, why should U.S. health IT be an exception? Indeed, given the scientific and ethical complexities of medicine, it is hard to think of any other realm where a commitment to transparency and collaboration in information technology is more appropriate. And, in fact, the largest and most successful example of digital medicine is an open-source program called VistA, the one Midland chose.
VistA was born in the 1970s out of an underground movement within the Veterans Health Administration known as the “Hard Hats.” The group was made up of VA doctors, nurses, and administrators around the country who had become frustrated with the combination of heavy caseloads and poor record keeping at the institution. Some of them figured that then-new personal and mini computers could be the solution. The VA doctors pioneered the nation’s first functioning electronic medical record system, and began collaborating with computer programmers to develop other health IT applications, such as systems that gave doctors online advice in making diagnoses and settling on treatments.
The key advantages of this collaborative approach were both technical and personal. For one, it allowed medical professionals to innovate and learn from each other in tailoring programs to meet their own needs. And by involving medical professionals in the development and application of information technology, it achieved widespread buy-in of digitized medicine at the VA, which has often proven to be a big problem when proprietary systems are imposed on doctors elsewhere.
This open approach allowed almost anyone with a good idea at the VA to innovate. In 1992, Sue Kinnick, a nurse at the Topeka, Kansas, VA hospital, was returning a rental car and saw the use of a bar-code scanner for the first time. An agent used a wand to scan her car and her rental agreement, and then quickly sent her on her way. A light went off in Kinnick’s head. “If they can do this with cars, we can do this with medicine,” she later told an interviewer. With the help of other tech-savvy VA employees, Kinnick wrote software, using the Hard Hats’ public domain code, that put the new scanner technology to a new and vital use: preventing errors in dispensing medicine. Under Kinnick’s direction, patients and nurses were each given bar-coded wristbands, and all medications were bar-coded as well. Then nurses were given wands, which they used to scan themselves, the patient, and the medication bottle before dispensing drugs. This helped prevent four of the most common dispensing errors: wrong med, wrong dose, wrong time, and wrong patient. The system, which has been adopted by all veterans hospitals and clinics and continuously improved by users, has cut the number of dispensing errors in half at some facilities and saved thousands of lives.
At first, the efforts of enterprising open-source innovators like Kinnick brought specific benefits to the VA system, such as fewer medical errors and reduced patient wait times through better scheduling. It also allowed doctors to see more patients, since they were spending less time chasing down paper records. But eventually, the open-source technology changed the way VA doctors practiced medicine in bigger ways. By mining the VA’s huge resource of digitized medical records, researchers could look back at which drugs, devices, and procedures were working and which were not. This was a huge leap forward in a profession where there is still a stunning lack of research data about the effectiveness of even the most common medical procedures. Using VistA to examine 12,000 medical records, VA researchers were able to see how diabetics were treated by different VA doctors, and by different VA hospitals and clinics, and how they fared under the different circumstances. Those findings could in turn be communicated back to doctors in clinical guidelines delivered by the VistA system. In the 1990s, the VA began using the same information technology to see which surgical teams or hospital managers were underperforming, and which deserved rewards for exceeding benchmarks of quality and safety.
Thanks to all this effective use of information technology, the VA emerged in this decade as the bright star of the American health system in the eyes of most health-quality experts. True, one still reads stories in the papers about breakdowns in care at some VA hospitals. That is evidence that the VA is far from perfect—but also that its information system is good at spotting problems. Whatever its weaknesses, the VA has been shown in study after study to be providing the highest-quality medical care in America by such metrics as patient safety, patient satisfaction, and the observance of proven clinical protocols, even while reducing the cost per patient.
Following the organization’s success, a growing number of other government-run hospitals and clinics have started adapting VistA to their own uses. This includes public hospitals in Hawaii and West Virginia, as well as all the hospitals run by the Indian Health Service. The VA’s evolving code also has been adapted by providers in many other countries, including Germany, Finland, Malaysia, Brazil, India, and, most recently, Jordan. To date, more than eighty-five countries have sent delegations to study how the VA uses the program, with four to five more coming every week.
Proprietary systems, by contrast, have gotten a cool reception. Although health IT companies have been trying to convince hospitals and clinics to buy their integrated patient-record software for more than fifteen years, only a tiny fraction have installed such systems. Part of the problem is our screwed-up insurance reimbursement system, which essentially rewards health care providers for performing more and more expensive procedures rather than improving patients’ welfare. This leaves few institutions that are not government run with much of a business case for investing in health IT; using digitized records to keep patients healthier over the long term doesn’t help the bottom line.
But another big part of the problem is that proprietary systems have earned a bad reputation in the medical community for the simple reason that they often don’t work very well. The programs are written by software developers who are far removed from the realities of practicing medicine. The result is systems which tend to create, rather than prevent, medical errors once they’re in the hands of harried health care professionals. The Joint Commission, which accredits hospitals for safety, recently issued an unprecedented warning that computer technology is now implicated in an incredible 25 percent of all reported medication errors. Perversely, license agreements usually bar users of proprietary health IT systems from reporting dangerous bugs to other health care facilities. In open-source systems, users learn from each other’s mistakes; in proprietary ones, they’re not even allowed to mention them.
If proprietary health IT systems are widely adopted, even more drawbacks will come sharply into focus. The greatest benefits of health IT—and ones the Obama administration is counting on—come from the opportunities that are created when different hospitals and clinics are able to share records and stores of data with each other. Hospitals within the digitized VA system are able to deliver more services for less mostly because their digital records allow doctors and clinics to better coordinate complex treatment regimens. Electronic medical records also produce a large collection of digitized data that can be easily mined by managers and researchers (without their having access to the patients’ identities, which are privacy protected) to discover what drugs, procedures, and devices work and which are ineffective or even dangerous. For example, the first red flags about Vioxx, an arthritis medication that is now known to cause heart attacks, were raised by the VA and large private HMOs, which unearthed the link by mining their electronic records. Similarly, the IT system at the Mayo Clinic (an open-source one, incidentally) allows doctors to personalize care by mining records of specific patient populations. A doctor treating a patient for cancer, for instance, can query the treatment outcomes of hundreds of other patients who had tumors in the same area and were of similar age and family backgrounds, increasing odds that they choose the most effective therapy.
But in order for data mining to work, the data has to offer a complete picture of the care patients have gotten from all the various specialists involved in their treatment over a period of time. Otherwise it’s difficult to identify meaningful patterns or sort out confounding factors. With proprietary systems, the data is locked away in what programmers call “black boxes,” and cannot be shared across hospitals and clinics. (This is partly by design; it’s difficult for doctors to switch IT providers if they can’t extract patient data.) Unless patients get all their care in one facility or system, the result is a patchwork of digital records that are of little or no use to researchers. Significantly, since proprietary systems can’t speak to each other, they also offer few advantages over paper records when it comes to coordinating care across facilities. Patients might as well be schlepping around file folders full of handwritten charts.
Of course, not all proprietary systems are equally bad. A program offered by Epic Systems Corporation of Wisconsin rivals VistA in terms of features and functionality. When it comes to cost, however, open source wins hands down, thanks to no or low licensing costs. According to Dr. Scott Shreeve, who is involved in the VistA installations in West Virginia and elsewhere, installing a proprietary system like Epic costs ten times as much as VistA and takes at least three times as long—and that’s if everything goes smoothly, which is often not the case. In 2004, Sutter Health committed $154 million to implementing electronic medical records in all the twenty-seven hospitals it operated in Northern California using Epic software. The project was supposed to be finished by 2006, but things didn’t work out as planned. Sutter pulled the plug on the project in May of this year, having completed only one installation and facing remaining cost estimates of $1 billion for finishing the project. In a letter to employees, Sutter executives explained that they could no longer afford to fund employee pensions and also continue with the Epic buildout.
Unfortunately, billions of taxpayers’ dollars are about to be poured into expensive, inadequate proprietary software, thanks to a provision in the stimulus package. The bill offers medical facilities as much as $64,000 per physician if they make “meaningful use” of “certified” health IT in the next year and a half, and punishes them with cuts to their Medicare reimbursements if they don’t do so by 2015. Obviously, doctors and health administrators are under pressure to act soon. But what is the meaning of “meaningful use”? And who determines which products qualify? These questions are currently the subject of bitter political wrangling.
Vendors of proprietary health IT have a powerful lobby, headed by the Healthcare Information and Management Systems Society, a group with deep ties to the Obama administration. (The chairman of HIMSS, Blackford Middleton, is an adviser to Obama’s health care team and was instrumental in getting money for health IT into the stimulus bill.) The group is not openly against open source, but last year when Rep. Pete Stark of California introduced a bill to create a low-cost, open-source health IT system for all medical providers through the Department of Health and Human Services, HIMSS used its influence to smash the legislation. The group is now deploying its lobbying clout to persuade regulators to define “meaningful use” so that only software approved by an allied group, the Certification Commission for Healthcare Information Technology, qualifies. Not only are CCHIT’s standards notoriously lax, the group is also largely funded and staffed by the very industry whose products it is supposed to certify. Giving it the authority over the field of health IT is like letting a group controlled by Big Pharma determine which drugs are safe for the market.
Even if the proprietary health IT lobby loses the battle to make CCHIT the official standard, the promise of open-source health IT is still in jeopardy. One big reason is the far greater marketing power that the big, established proprietary vendors can bring to bear compared to their open-source counterparts, who are smaller and newer on the scene. A group of proprietary industry heavyweights, including Microsoft, Intel, Cisco, and Allscripts, is sponsoring the Electronic Health Record Stimulus Tour, which sends teams of traveling sales representatives to tell local doctors how they can receive tens of thousands of dollars in stimulus money by buying their products—provided that they “act now.” For those medical professionals who can’t make the show personally, helpful webcasts are available. The tour is a variation on a tried-and-true strategy: when physicians are presented with samples of pricey new name-brand substitutes for equally good generic drugs, time and again they start prescribing the more expensive medicine. And they are likely to be even more suggestible when it comes to software because most don’t know enough about computing to evaluate vendors’ claims skeptically.
What can be done to counter this marketing offensive and keep proprietary companies from locking up the health care IT market? The best and simplest answer is to take the stimulus money off the table, at least for the time being. Rather than shoveling $20 billion into software that doesn’t deliver on the promise of digital medicine, the government should put a hold on that money pending the results of a federal interagency study that will be looking into the potential of open-source health IT and will deliver its findings by October 2010.
As it happens, that study is also part of the stimulus bill. The language for it was inserted by West Virginia Senator Jay Rockefeller, who has also introduced legislation that would help put open-source health IT on equal footing with the likes of Allscripts and Microsoft. Building on the systems developed by the VA and the Indian Health Service, Rockefeller’s bill would create an open-source government-sponsored “public utility” that would distribute VistA-like software, along with grants to pay for installation and maintenance. The agency would also be charged with developing quality standards for open-source health IT and guidelines for interoperability. This would give us the low-cost, high-quality, fully integrated and proven health IT infrastructure we need in order to have any hope of getting truly better health care.
Delaying the spending of that $20 billion would undoubtedly infuriate makers of proprietary health software. But it would be welcomed by health care providers who have long resisted—partly for good reason—buying that industry’s product. Pushing them to do so quickly via the stimulus bill amounts to a giant taxpayer bailout of health IT companies whose business model has never really worked. That wouldn’t just be a horrendous waste of public funds; it would also lock the health care industry into software that doesn’t do the job and would be even more expensive to get rid of later.
As the administration and Congress struggle to pass a health care reform bill, questions about which software is best may seem relatively unimportant—the kind of thing you let the “tech guys” figure out. But the truth is that this bit of fine print will determine the success or failure of the whole health care reform enterprise. So it’s worth taking the time to get the details right.
Whoopsie! So now it’s pushed into the underground, who are most assuredly taking advantage of it. Sorry, MS: your usual security-by-obscurity approach failed many years ago.
It seems you can’t buy a Windows PC anymore without tons of crap on it that you never use and that just junks up your computer. This FREE program seeks out and destroys all the crapware that comes on most systems. This bad boy is going onto my thumbdrive now.
I am normally not a fan of government recommendations, due to the fact that they are usually outdated and irrelevant by the time they are released. However, a paper released by NIST is the large exception to this rule. To security professionals these are no-brainers, but I still run into a ton of resistance implementing even a small portion of them. If all computer users were to follow these as appropriate (contact your consultant for information..like me..), you can see how we could tailor a security solution that will drastically reduce your exposure to all online threats.
Small Business Information Security:
Computer Security Division
Information Technology Laboratory
National Institute of Standards and Technology
Gaithersburg, MD 20899
U.S. Department of Commerce
Gary Locke, Secretary
National Institute of Standards and Technology
Patrick D. Gallagher, Deputy Director
The author, Richard Kissel, wishes to thank his colleagues and reviewers who contributed greatly to the document’s development. Special thanks goes to Mark Wilson, Shirley Radack, and Carolyn Schmidt for their insightful comments and suggestions. Kudos to Kevin Stine for his awesome Word editing skills.
Table of Contents
2. The “absolutely necessary” actions that a small business should take to protect its information, systems, and networks
2.1 Protect information/systems/networks from damage by viruses, spyware, and other malicious code
2.2 Provide security for your Internet connection
2.3 Install and activate software firewalls on all your business systems
2.4 Patch your operating systems and applications
2.5 Make backup copies of important business data/information
2.6 Control physical access to your computers and network components
2.7 Secure your wireless access point and networks
2.8 Train your employees in basic security principles
2.9 Require individual user accounts for each employee on business computers and for business applications
2.10 Limit employee access to data and information, and limit authority to install software
3. Highly Recommended Practices
3.1 Security concerns about email attachments and emails requesting sensitive information
3.2 Security concerns about web links in email, instant messages, social media, or other means
3.3 Security concerns about popup windows and other hacker tricks
3.4 Doing online business or banking more securely
3.5 Recommended personnel practices in hiring employees
3.6 Security considerations for web surfing
3.7 Issues in downloading software from the Internet
3.8 How to get help with information security when you need it
3.9 How to dispose of old computers and media
3.10 How to protect against Social Engineering
4. Other planning considerations for information, computer, and network security
4.1 Contingency and Disaster Recovery planning considerations
4.2 Cost-Avoidance considerations in information security
4.3 Business policies related to information security and other topics
Appendix A: Identifying and prioritizing your organization’s information types
Appendix B: Identifying the protection needed by your organization’s priority information types
Appendix C: Estimated costs from bad things happening to your important business information
For some small businesses, the security of their information, systems, and networks might not be a high priority, but for their customers, employees, and trading partners it is very important. The term Small Enterprise (or Small Organization) is sometimes used for this same category of business or organization. A small enterprise/organization may also be a nonprofit organization. The size of a small business varies by type of business, but typically is a business or organization with up to 500 employees.1
In the United States, small businesses make up over 95% of all businesses. The small business community produces around 50% of our nation’s Gross National Product (GNP) and creates around 50% of all new jobs in our country. Small businesses, therefore, are a very important part of our nation’s economy. They are a significant part of our nation’s critical economic and cyber infrastructure.
Larger businesses in the United States have been actively pursuing information security with significant resources including technology, people, and budgets for some years now. As a result, they have become a much more difficult target for hackers and cyber criminals. What we are seeing is that the hackers and cyber criminals are now focusing more of their unwanted attention on less secure small businesses.
Therefore, it is important that each small business appropriately secure its information, systems, and networks.
This Interagency Report (IR) will assist small business management to understand how to provide basic security for their information, systems, and networks.
Why should a small business be interested in, or concerned with information security?
The customers of small businesses have an expectation that their sensitive information will be respected and given adequate and appropriate protection. The employees of a small business also have an expectation that their sensitive personal information will be appropriately protected.
And, in addition to these two groups, current and/or potential business partners also have their expectations of the status of information security in a small business. These business partners want assurance that their information, systems, and networks are not put “at risk” when they connect to and do business with this small business. They expect an appropriate level of security in this actual or potential business partner – similar to the level of security that they have implemented in their own systems and networks.
Some of the information used in your business requires special protection for confidentiality (to ensure that only those who need access to that information to do their jobs actually have access to it). Some of the information used in your business needs protection for integrity (to ensure that the information has not been tampered with or deleted by those who should not have had access to it). Some of the information used in your business needs protection for availability (to ensure that the information is available when it is needed by those who conduct the organization’s business). And, of course, some information used in your business needs protection for more than one of these categories of information security.
1 US Small Business Administration, Table of Small Business Size Standards, http://www.sba.gov/idc/groups/public/documents/sba_homepage/serv_sstd_tablepdf.pdf
Such information might be sensitive employee or customer information, confidential business research or plans, financial information, or information falling under special information categories such as privacy information, health information, or certain types of financial information. Some of these information categories have special, much more restrictive regulatory requirements for specific types of information security protections. Failure to properly protect such information, based on the required protections, can easily result in significant fines and penalties from the regulatory agencies involved.
Just as there is a cost involved in protecting information (for hardware, software, or management controls such as policies & procedures, etc), there is also a cost involved in not protecting information. Those engaged in risk management for a small business are also concerned with cost-avoidance – in this case, avoiding the costs of not protecting sensitive business information.
When we consider cost-avoidance, we need to be aware of those costs that aren’t immediately obvious. Among such costs are the notification laws that many states have passed which require any business, including small businesses, to notify, in a specified manner, all persons whose data might have been exposed in a security breach (hacker incident, malicious code incident, an employee doing an unauthorized release of information, etc). The average estimated cost for these notifications and associated security breach costs is well over $130.00 per person. If you have 1000 customers whose data might have been compromised in an incident, then your minimum cost would be $130,000, per incident. Prevention of identity theft is a goal of these laws/regulations. This should provide motivation to implement adequate security to prevent such incidents. Of course, if there is such an incident then some customers will lose their trust in the affected small business and take their business elsewhere. This is another cost that isn’t immediately obvious, but which is included in the above per-person cost.
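The cost-avoidance arithmetic above is easy to sketch. This is a minimal illustration, assuming the report’s rough figure of $130 per affected person as a floor (the function name and default value are mine; substitute the per-record estimate appropriate for your industry and state):

```python
def breach_notification_cost(records_exposed: int, cost_per_record: float = 130.0) -> float:
    """Minimum estimated cost of breach notification, using the
    ~$130-per-person figure cited above (an assumed default)."""
    return records_exposed * cost_per_record

if __name__ == "__main__":
    # 1000 customers exposed -> at least $130,000, per incident
    for customers in (100, 1000, 10000):
        print(f"{customers:>6} records exposed -> ${breach_notification_cost(customers):,.2f} minimum")
```

Remember that this is only the floor: lost customer trust, regulatory fines, and remediation costs come on top of the notification figure.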
Consider viruses and other malicious code (programs): there were over 1.6 million new viruses and other malicious programs detected in 2008 (Symantec – Internet Security Threat Report, April 14, 2009). It is unthinkable to operate a computer without protection from these harmful programs. Many, if not most, of these viruses or malicious code programs are used by organized crime to steal information from computers and make money by selling or illegally using that information for such purposes as identity theft.
It is not possible for a small business to implement a perfect information security program, but it is possible (and reasonable) to implement sufficient security for information, systems, and networks that malicious individuals will go elsewhere to find an easier target. Additional information may be found on the NIST Computer Security web page at: http://csrc.nist.gov.
2. The “absolutely necessary” actions that a small business should take to protect its information, systems, and networks.
These practices must be done to provide basic information security for your information, computers, and networks.
2.1 Protect information/systems/networks from damage by viruses, spyware, and other malicious code.
Install, use (in “real-time” mode, if available), and keep regularly updated anti-virus and anti-spyware software on every computer used in your business.
Many commercial software vendors provide adequate protection at a reasonable price, and some for free. An internet search for anti-virus and anti-spyware products will show many of these organizations. Most vendors now offer subscriptions to “security service” applications, which provide multiple layers of protection (in addition to anti-virus and anti-spyware protection).
You should be able to set the antivirus software to automatically check for updates at some scheduled time during the night (12 Midnight, for example) and then set it to do a scan soon afterwards (12:30am, for example). Schedule the anti-spyware software to check for updates at 2:30am and to do a full system scan at 3:00am. This assumes that you have an always-on, high-speed connection to the Internet. Regardless of the actual scheduled times for the above updates/scans, schedule them so that only one activity is taking place at any given time.
It is a good idea to obtain copies of your business anti-virus software for your and your employees’ home computers. Most people do some business work at home, so it is important to protect their home systems, too.
2.2 Provide security for your Internet connection.
Most businesses have broadband (high speed) access to the Internet. It is important to keep in mind that this type of Internet access is always “on.” Therefore, your computer – or any network your computer is attached to – is exposed to threats from the Internet on a 24 hour a day/7 day a week basis.
For broadband Internet access, it is critical to install and keep operational a hardware firewall between your internal network and the Internet. This may be a function of a wireless access point/router or may be a function of a router provided by the Internet Service Provider (ISP) of the small business. There are many hardware vendors which provide firewall wireless access points/routers, firewall routers, and firewalls.
Since employees will do some business work at home, ensure that all employees’ home systems are protected by a hardware firewall between their system(s) and the Internet.
For these devices, change the administrative password upon installation and regularly thereafter. It is a good idea to change the administrator’s name as well. The default values are easily guessed, and, if not changed, may allow hackers to control your device and thus, to monitor or record your communications (and data) to/from the Internet.
2.3 Install and activate software firewalls on all your business systems.
Install, use, and keep updated a software firewall on each computer system used in your small business.
If you use the Microsoft Windows operating system, it probably has a firewall included. You have to ensure that the firewall is operating, but it should be available.
To check the software firewall provided with Microsoft Windows XP, click on “Start” then “Settings”, then “Control Panel”, then “Windows Firewall”. Select the “General” tab on the top of the popup window. You can see if the firewall is on or off. If it is off, select “On-Recommended” in the hollow circle next to the green check-mark icon.
To check the software firewall provided with Microsoft Windows Vista, click on “Start” then “Control Panel” then “Windows Firewall.” If your firewall is working, you should see a message that “Windows Firewall is helping to protect your computer.” If not, click on ‘Turn Windows Firewall on or off” (in the upper left corner of the window) and select “Turn on firewall.”
When using other commercial operating systems, ensure that you fully review operations manuals to discover if your system has a firewall included and how it is enabled.
There are commercial software firewalls that you can purchase at a reasonable price, or for free, that you can use with your Windows systems or with other operating systems. Again, internet searches and online/trade-magazine reviews and references can assist in selecting a good solution.
Again, since employees do some business work at home, ensure that employees’ home systems have firewalls installed and operational on them.
It is necessary to have software firewalls on each computer even if you have a hardware firewall protecting your network. If your hardware firewall is compromised by a hacker or by malicious code of some kind, you don’t want the intruder or malicious program to have unlimited access to your computers and the information on those computers.
2.4 Patch your operating systems and applications.
All operating system vendors provide patches and updates to their products to correct security problems and to improve functionality. Microsoft provides monthly patches on the second Tuesday of each month. From time to time, Microsoft will issue an “off schedule” patch to respond to a particularly serious threat. To update any supported version of Windows, go to “Start” and select “Windows Update” or “Microsoft Update.” Follow the prompts to select and install the recommended patches. Other operating system vendors have similar functionality. Ensure that you know how to update and patch any operating system you select. Operating system vendors include: Microsoft (various versions of Windows), Apple (Mac OSX, Snow Leopard), Sun (SunOS, Solaris), and sources of other versions of Unix and Linux. Note: when you purchase new computers, update them immediately. Do the same when you install new software.
For Microsoft Windows XP, select “Start”, then “Control Panel”, then “System”, then “Automatic Updates”. After that, set the day and time to download and install updates. Select “Apply” and click “OK”.
For Microsoft Windows Vista, select “Start”, then “Control Panel”, then “Security”, then “Turn Automatic Updating on or off”. If the circle is marked which says “Install updates automatically (recommended)”, check to see that the day/time tabs are set to “every day” and “11:00pm” or some other convenient time. If the circle is not marked which says “Install updates automatically (recommended)”, then check the circle to activate automatic updates and select “every day” on the left tab, then select an appropriate time (11:00pm is fine) for the right tab. Then, towards the bottom of the window, check “Recommended Updates” and for “Update Service” check “Use Microsoft Update”. Then click on “OK” at the bottom of the window and you are all set for automatic updates for your Windows Vista system.
Office productivity products such as Microsoft Office also need to be patched & updated on a regular basis. For Microsoft products, the patch/update process is similar to that of the Microsoft Windows operating systems. Other business software products also need to be updated regularly.
2.5 Make backup copies of important business data/information.
Back up your data on each computer used in your business. Your data includes (but is not limited to) word processing documents, electronic spreadsheets, databases, financial files, human resources files, accounts receivable/payable files, and other information used in or generated by your business.
It is necessary to back up your data because computers die, hard disks fail, employees make mistakes, and malicious programs can destroy data on computers. Without data backups, you can easily get into a situation where you have to recreate your business data from paper copies and other manual files.
Do this automatically if possible. Many security software suites offer automated backup functions that will do this on a regular schedule for you. Back up only your data, not the applications themselves (for which you should have distribution CDs from your vendor). This automatic backup should be done at least once a week, and stored on a separate hard disk on your computer, or offline using some form of removable media or online storage. The hard disk should have enough capacity to hold data for 52 weekly backups. (The size of the storage device should be about 52 times the amount of data that you have, plus 30% or so.) Remember, this should be done on each of your business computers. It is important to periodically test your backed up data to ensure that you can read it reliably. There are “plug and play” products which, when connected to your computer, will automatically search for files and back them up to a removable media, such as an external USB hard disk.
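The sizing rule of thumb above (capacity for 52 weekly backups plus roughly 30% headroom) can be sketched as a quick calculation. The 52-week and 30% figures come straight from the text; the function name and defaults are my own:

```python
def backup_disk_size_gb(data_size_gb: float, weeks: int = 52, headroom: float = 0.30) -> float:
    """Estimated backup-disk capacity: one weekly copy of the data
    for a year, plus ~30% extra room, per the rule of thumb above."""
    return data_size_gb * weeks * (1.0 + headroom)

# e.g. 10 GB of business data calls for roughly 10 * 52 * 1.3 = 676 GB of backup storage
print(f"{backup_disk_size_gb(10):.0f} GB")
```

In practice, incremental backup tools need far less space than this worst-case full-copy estimate, but the rule of thumb keeps you safely on the high side.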
It is important to make a full backup once a month and store it away from your office location in a protected place. If something happens to your office (fire, flood, tornado, theft, etc) then your data is safe in another location and you can restore your business operations using your backup data and replacement computers and other necessary hardware and software. As you test your individual computer backups to ensure they can be read, it is equally important that you test your monthly backups to ensure that you can read them. If you don’t test your backups, you have no grounds for confidence that you will be able to use them in the event of a disaster or contingency.
If you choose to do this monthly backup manually, an easy way is to purchase a form of removable media, such as an external USB hard drive (at least 1000 Gigabytes capacity). On the hard drive, create a separate folder for each of your computers, and create 2 folders in each computer folder – one for each odd numbered month and one for each even numbered month. Bring the external disk into your office on the day that you do your monthly backup. Then, complete the following steps: connect the external disk to your first computer and make your backup by copying your data into the appropriate designated folder; immediately do a test restore of a file or folder into a separate folder on your computer that has been set up for this test (to ensure that you can read the restored file or folder). Repeat this process for each of your business computers and, at the end of the process, disconnect the external drive. At the end of the day, take the backup hard drive to the location where you store your monthly backups. At the end of the year, label and store the hard disk in a safe place, and purchase another one for use in the next year.
It is very important to do this monthly backup for each computer used in your business.
2.6 Control physical access to your computers and network components.
Do not allow unauthorized persons to have physical access to or to use of any of your business computers. This includes locking up laptops when they are not in use. It is a good idea to position each computer’s display (or use a privacy screen) so that people walking by cannot see the information on the screen.
Controlling access to your systems and networks also involves being fully aware of anyone who has access to the systems or networks. This includes cleaning crews who come in at night to empty the trash and clean the office space. Criminals often attempt to get jobs on cleaning crews for the purpose of breaking into computers for the sensitive information that they expect to find there. Controlling access also includes being careful about having computer or network repair personnel working unsupervised on systems or devices. It is easy for them to steal privacy/sensitive information and walk out the door with it without anyone noticing anything unusual.
No one should be able to walk into your office space without being challenged by an employee. This can be done in a pleasant, cordial manner, but it must be done to identify those who do not have a legitimate reason for being in your offices. “How may I help you?” is a pleasant way to challenge an unknown individual.
2.7 Secure your wireless access point and networks.
If you use wireless networking, it is a good idea to set the wireless access point so that it does not broadcast its Service Set Identifier (SSID). Also, it is critical to change the administrative password that was on the device when you received it. It is important to use strong encryption so that your data being transmitted between your computers and the wireless access point cannot be easily intercepted and read by electronic eavesdroppers. The current recommended encryption is WiFi Protected Access 2 (WPA-2) – using the Advanced Encryption Standard (AES) for secure encryption. See your owner’s manual for directions on how to make the above changes. Note that WEP (Wired-Equivalent Privacy) is not considered secure; do not use it for encrypting your wireless traffic.
2.8 Train your employees in basic security principles.
Employees who use any computer programs containing sensitive information should be told about that information and must be taught how to properly use and protect that information. On the first day that your new employees start work, they need to be taught what your information security policies are and what they are expected to do to protect your sensitive business information. They need to be taught what your policies require for their use of your computers, networks, and Internet connections.
In addition, teach them your expectations concerning limited personal use of telephones, printers, and any other business owned or provided resources. After this training, they should be requested to sign a statement that they understand these business policies, that they will follow your policies, and that they understand the penalties for not following your policies. (You will need clearly spelled-out penalties for violation of business policies.)
Set up and teach “rules of behavior” which describe how to handle and protect customer data and other business data. This may include not taking business data home or rules about doing business work on home computers.
Having your employees trained in the fundamentals of information, system, and network security is one of the most effective investments you can make to better secure your business information, systems, and networks. You want to develop a “culture of security” in your employees and in your business.
Typical providers of such security training could be your local Small Business Development Center (SBDC), community college, technical college, or commercial training vendors.
2.9 Require individual user accounts for each employee on business computers and for business applications.
Set up a separate account for each individual and require that good passwords be used for each account. Good passwords consist of a random sequence of letters, numbers, and special characters – and are at least 8 characters long.
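A password meeting the rule above (a random mix of letters, numbers, and special characters, at least 8 characters long) can be generated with Python’s `secrets` module, which is designed for security-sensitive randomness. The 12-character default here is my choice, comfortably above the 8-character minimum:

```python
import secrets
import string

def generate_password(length: int = 12) -> str:
    """Random password drawn from letters, digits, and punctuation."""
    if length < 8:
        raise ValueError("use at least 8 characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different random password every run
```

Generated passwords like these are hard to remember, which is exactly why pairing them with a password manager (rather than sticky notes) is a good practice.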
To better protect systems and information, ensure that all employees use computer accounts which do not have administrative privileges. This will stop any attempt – automated or not – to install unauthorized software. If an employee uses a computer with an administrative user account, then any malicious code that they activate (deliberately or by deception) will be able to install itself on their computer – since the malicious code will have the same administrative rights as the user account has.
Without individual accounts for each user, you may find it difficult to hold anyone accountable for data loss or unauthorized data manipulation.
Passwords which stay the same will, over time, be shared and become common knowledge to an individual user’s coworkers. Therefore, passwords should be changed at least every 3 months.
2.10 Limit employee access to data and information, and limit authority to install software.
Use good business practices to protect your information. Do not provide access to all data to any employee. Do not provide access to all systems (financial, personnel, inventory, manufacturing, etc) to any employee. For all employees, provide access to only those systems and only to the specific information that they need to do their jobs.
Do not allow a single individual to both initiate and approve a transaction (financial or otherwise).
The unfortunate truth is that insiders – those who work in a business – are the source of most security incidents in the business. The reason is that they are already inside, they are already trusted, and they have already been given access to important business information and systems. So, when they perform harmful actions (deliberately or otherwise), business information, systems, and networks suffer harm, and the business itself suffers harm as well.
Section 3: Highly Recommended Practices
These practices are very important and should be completed immediately after those in Section 2.
3.1 Security concerns about email attachments and emails requesting sensitive information.
For business or personal email, do not open email attachments unless you are expecting the email with the attachment and you trust the sender.
One of the most common means of distributing spyware or malicious code is via email attachments. Usually these threats are attached to emails that pretend to be from someone you know, but the “from” address has been altered and it only appears to be a legitimate message from a person you know.
It is always a good idea to call the individual who “sent” the email and ask whether they sent it and what the attachment is about. Sometimes a person’s computer is compromised and malicious code is installed on it. The malicious code then uses the computer to send emails, in the name of the computer’s owner, to everyone in the owner’s email address book. The emails appear to be from that person, but are actually sent by the malicious code. They typically carry copies of the malicious code (with a deceptive file name) as attachments and will attempt to install it on the computer of anyone who receives the email and opens the attachment.
Beware of emails which ask for sensitive personal or financial information – regardless of who the email appears to be from. No responsible business will ask for sensitive information in an email.
3.2 Security concerns about web links in email, instant messages, social media, or other means.
For business or personal email, do not click on links in email messages. Some scams are in the form of embedded links in emails. Once a recipient clicks on the link, malicious software (for example, viruses or key stroke logging software) is installed on the user’s computer. It is not a good idea to click on links in a Facebook or other social media page.
Don’t do it unless you know what the web link connects to and you trust the person who sent the email to you. It is a good idea to call the individual prior to clicking on a link and ask if they sent the email and what the link is for. Always hold the mouse pointer over the link and look at the bottom of the browser window to ensure that the actual link (displayed there) matches the link description in the message. (the mouse pointer changes from an arrow to a tiny hand when placed over an active link)
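The hover-and-compare check described above can also be automated. A hedged Python sketch that flags an HTML link whose visible URL text does not match the domain it actually points to; the regular expression is a deliberate simplification, not a complete HTML parser:

```python
import re

def link_looks_suspicious(html_anchor: str) -> bool:
    """Flag an anchor whose visible text looks like a URL but does not
    match the domain it actually links to (a common phishing trick)."""
    match = re.search(
        r'href="https?://([^/"]+)[^"]*"[^>]*>\s*(?:https?://)?([^<\s/]+\.[^<\s/]+)',
        html_anchor)
    if not match:
        return False  # no URL-looking link text to compare against
    actual_domain, displayed_domain = match.groups()
    return actual_domain.lower() != displayed_domain.lower()

# The text reads "mybank.com" but the link goes somewhere else entirely:
print(link_looks_suspicious(
    '<a href="http://evil.example.net/login">http://mybank.com</a>'))
```

This mirrors what the manual hover check does: compare the address shown in the message with the address the browser would actually visit.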
3.3 Security concerns about popup windows and other hacker tricks.
When connected to and using the Internet, do not respond to popup windows requesting that you click “OK” for anything.
If a window pops up on your screen informing you that you have a virus or spyware and suggesting that you download an antivirus or antispyware program to take care of it, close the popup window by selecting the X in the upper right corner of the popup window. Do not respond to popup windows informing you that you have to have a new codec, driver, or special program for something in the web page you are visiting. Close the popup window by selecting the X in the upper right corner of the popup window.
Most of these popup windows are actually trying to trick you into clicking on “OK” to download and install spyware or other malicious code onto your computer.
Hackers are known to scatter infected USB drives with provocative labels in public places where their target business’s employees hang out, knowing that curious individuals will pick them up and take them back to their office system to “see what’s on them.” What is on them is generally malicious code which installs a spy program or remote control program on the computer. Teach your employees to not bring USB drives into the office and plug them into your business computers (or to take them home and plug into their home systems). It is a good idea to disable the “AutoRun” feature for the USB ports on your business computers to help prevent such malicious programs from installing on your systems.
3.4 Doing online business or banking more securely.
Online business/commerce/banking should only be done using a secure browser connection. This will normally be indicated by a small padlock icon in the browser window and by a web address beginning with “https://”.
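As a minimal illustration of the rule above, a script can refuse to send sensitive data to anything but an `https://` address. This sketch only checks the URL scheme; a real client would also verify the server’s certificate:

```python
from urllib.parse import urlparse

def is_secure_url(url: str) -> bool:
    """Return True only for https:// URLs.
    Sensitive data should never be submitted over anything less."""
    return urlparse(url).scheme == "https"

print(is_secure_url("https://www.mybank.example/login"))
print(is_secure_url("http://www.mybank.example/login"))
```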
After any online commerce or banking session, erase your web browser cache, temporary internet files, cookies, and history so that if your system is compromised, that information will not be on your system to be stolen by the individual hacker or malware program.
If you use Microsoft Internet Explorer as your web browser, erase the web browser cache, temporary internet files, cookies, and browsing history by selecting “Tools,” then “Options,” then under the General tab, click on “Delete” (under Browsing History). This will erase your temporary files, history, cookies, saved passwords, and web form information. (These instructions are for Internet Explorer version 7.0 – instructions for other versions of Internet Explorer may be slightly different)
If you use Mozilla Firefox as your web browser, erase the web browser cache, temporary internet files, cookies, and browsing history by selecting “Tools,” then clicking on “Clear Private Data” towards the bottom of the popup window. To continue clearing information, click on “Tools,” then “Options,” then under the Privacy tab, select Show Cookies, then select “Remove All Cookies.” This will erase your session information. (These instructions are for Firefox version 3.0 – instructions for other versions of Firefox may be slightly different)
3.5 Recommended personnel practices in hiring employees.
When hiring new employees, conduct a comprehensive background check before making a job offer.
You should consider doing criminal background checks on all prospective new employees. Online background checks are quick and relatively inexpensive. Do a full, nationwide background check. You can’t afford to hire someone with a history of criminal behavior. In some areas, the local police department provides a computer on which to request a background check, and in some areas this service is free to you. If possible, it is a good idea to do a credit check on prospective employees. This is especially true if they will be handling your business funds. And do the rest of your homework – call their references and former employers.
If there are specific educational requirements for the job that they have applied for, call the schools they attended and verify their actual degree(s), date(s) of graduation, and GPA(s).
In considering doing background checks of potential employees, it is also an excellent idea for you to do a background check of yourself. Many people become aware that they are victims of identity theft only after they do a background check on themselves and find arrest records and unusual previous addresses where they never lived. (Some people become aware only after they are pulled over for a routine traffic stop and then arrested because the officer is notified of an outstanding arrest warrant for them)
3.6 Security considerations for web surfing.
No one should surf the web using a user account which has administrative privileges.
If you do surf the web using an administrative user account, then any malicious code that you happen across on the Internet may be able to install itself on your computer – since the malicious code will have the same administrative rights as your user account has. It is best to set up a special account with “guest” (limited) privileges to avoid this vulnerability.
3.7 Issues in downloading software from the Internet.
Do not download software from any unknown web page.
Only those web pages belonging to businesses with which you have a trusted business relationship should be considered reasonably safe for downloading software. Such trusted sites would include the Microsoft Update web page where you would get patches and updates for various versions of the Windows operating system and Microsoft Office or other similar software. Most other web pages should be viewed with suspicion.
Be very careful if you decide to use freeware or shareware from a source on the web. Most of these do not come with technical support and some are deliberately crippled so that you do not have the full functionality you might be led to believe will be provided.
3.8 How to get help with information security when you need it.
No one is an expert in every business and technical area. Therefore, when you need specialized expertise in information/computer/network security, get help. Ask your SBDC or Service Corps of Retired Executives (SCORE – usually co-located with your local SBDC office) Office for advice and recommendations. You might consider your local Chamber of Commerce, Better Business Bureau, community college, and/or technical college as a source of referrals for potential providers. For information on identity theft, go to: http://www.ftc.gov/bcp/edu/microsites/idtheft/ – this is the Federal Trade Commission’s web page.
When you get a list of service providers, prepare a request for quotes and send it out as a set of actions or outcomes that you want to receive. Carefully examine and review the quote from each firm responding to your request. Research each firm’s past performance and check its references carefully. Request a list of past customers and contact each one to see if the customer was satisfied with the firm’s performance and would hire the firm again for future work. Find out who – on the firm’s professional staff – will be doing your work. Ask for their professional qualifications for doing your work. Find out how long the firm has been in business (because you probably don’t want a firm which set up shop last week).
3.9 How to dispose of old computers and media.
When disposing of old business computers, remove the hard disks and destroy them. The destruction can be done by taking apart the disk and beating the hard disk platters with a hammer. You could also use a drill with a long drill bit and drill several holes through the hard disk and through the recording platters. Remember to destroy the electronics and connectors as part of this project. You can also take your hard disks to companies who specialize in destroying storage devices such as hard disks.
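Before physical destruction, individual files can also be overwritten in software. A hedged Python sketch; note that on SSDs and journaling filesystems overwriting is not reliable, which is exactly why the guideline above recommends destroying the disk itself:

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes several times,
    then delete it. This is a best-effort measure only -- it does NOT
    replace physical destruction of the storage device."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)
```

Tools like `shred` on Linux implement the same idea, with the same caveat about modern storage hardware.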
When disposing of old media (CDs, floppy disks, USB drives, etc), destroy any containing sensitive business or personal data. Media also includes paper. When disposing of paper containing sensitive information, destroy it by using a crosscut shredder. Incinerate paper containing very sensitive information.
It is very common for small businesses to discard old computers and media without destroying the computers’ hard disks or the media. Sensitive business and personal information is regularly found on computers purchased on eBay, thrift shops, Goodwill, etc, much to the embarrassment of the small businesses involved (and much to the annoyance of customers or employees whose sensitive data is compromised). This is a practice which can result in identity theft for the individuals whose information is retrieved from those systems. Destroy hard disks & media and recycle everything else.
3.10 How to protect against Social Engineering.
Social engineering is a personal or electronic attempt to obtain unauthorized information or access to systems/facilities or sensitive areas by manipulating people.
The social engineer researches the organization to learn names, titles, responsibilities, and publicly available personal identification information. Then the social engineer usually calls the organization’s receptionist or help desk with a believable, but made-up story designed to convince the person that the social engineer is someone in, or associated with, the organization and needs information or system access which the organization’s employee can provide and will feel obligated to provide.
To protect against social engineering techniques, employees must be taught to be helpful, but vigilant when someone calls in for help and asks for information or special system access. The employee must first authenticate the caller by asking for identification information that only the person who is in or associated with the organization would know. If the individual is not able to provide such information, then the employee should politely, but firmly refuse to provide what has been requested by the social engineer.
The employee should then notify management of the attempt to obtain information or system access.
Section 4: Other planning considerations for information, computer, and network security
In addition to the operational guidelines provided above, there are other considerations that a small business needs to understand and address.
4.1 Contingency and Disaster Recovery planning considerations
What happens if there is a disaster (flood, fire, tornado, etc) or a contingency (power outage, sewer backup, accidental sprinkler activation, etc)? Do you have a plan for restoring business operations during or after a disaster or a contingency? Since we all experience power outages or brownouts from time to time, do you have Uninterruptible Power Supplies (UPS) on each of your computers and critical network components? They allow you to work through short power outages and to save your data when the electricity goes off.
Have you done an inventory of all information used in running your business? Do you know where each type of information is located (on which computer or server)? Have you prioritized your business information so that you know which type of information is most critical to the operation of your business – and, therefore, which type of information must be restored first in order to run your most critical operations? If you have never (or not recently) done a full inventory of your important business information, now is the time. For a very small business, this shouldn’t take longer than a few hours. For a larger small business, this might take from a day to a week or so. (See Appendix A for a worksheet template for such an inventory.)
After you complete this inventory, ensure that the information is prioritized relative to importance for the entire business, not necessarily for a single part of the business. When you have your prioritized information inventory (on an electronic spreadsheet), add three columns to address the kind of protection that each type of information needs. Some information will need protection for confidentiality, some for integrity, and some for availability. Some might need all three types of protection. (See Appendix B for a worksheet template for this information.)
This list will be very handy when you start to decide how to implement security for your important information and where to spend your scarce resources to protect your important information. No one has enough resources to protect every type of information in the best possible way, so you start with the highest priority information, protecting each successive priority level until you run out of resources. Using this method, you will get the most “bang for your buck” for protecting your important information.
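The prioritized inventory described above (and in Appendices A and B) can be kept as a simple data structure instead of a spreadsheet. A sketch with made-up information types, systems, and C/I/A flags:

```python
# Illustrative inventory entries; priorities and protection flags are examples.
inventory = [
    {"type": "Customer financial data", "system": "Accounting PC", "priority": 1,
     "confidentiality": True, "integrity": True, "availability": True},
    {"type": "Employee personnel files", "system": "Office server", "priority": 2,
     "confidentiality": True, "integrity": True, "availability": False},
    {"type": "Marketing brochures", "system": "Web server", "priority": 5,
     "confidentiality": False, "integrity": True, "availability": True},
]

# Protect the highest-priority information first, until resources run out.
for item in sorted(inventory, key=lambda i: i["priority"]):
    flags = "".join(letter for letter, needed in
                    [("C", item["confidentiality"]),
                     ("I", item["integrity"]),
                     ("A", item["availability"])] if needed)
    print(f'{item["priority"]}. {item["type"]} ({item["system"]}): needs {flags}')
```

Sorting by priority makes the “spend scarce resources top-down” rule mechanical: work the list until the budget is exhausted.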
In the event of a security incident which results in “lost” data because of malicious code, hackers, or employee misconduct, establish procedures to report incidents to employees and/or customers. Most states have notification laws requiring specific notifications to affected customers.
4.2 Cost-Avoidance considerations in information security.
In Section 1 (Introduction), we discussed cost-avoidance factors. It is important to have an idea of how much loss exposure your business has if something bad happens to your information.
Something “bad” might involve a loss of confidentiality. Perhaps a virus or other malicious program compromises one of your computers and steals a copy of your business’ sensitive information (perhaps employee health information, employee personally identifiable information, or customer financial information). Such a loss could easily result in identity theft for employees or customers. It’s not unusual for business owners or managers to be unaware of the financial risk to the business in such situations.
Appendix C contains a worksheet template for estimating financial exposure under different data/information incident scenarios. This worksheet should be filled out for each data type used in your business, from the highest priority to the lowest priority.
It is important to understand that there is a real cost associated with not providing adequate protection to sensitive business information and that this cost is usually invisible until something bad happens. Then it becomes all too real (and all too expensive) and visible.
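The Appendix C worksheet amounts to summing estimated costs per incident scenario for each information type. A sketch with purely illustrative dollar figures:

```python
# Estimated costs (in dollars) for one information type under different
# incident scenarios. All figures below are made-up examples.
cost_worksheet = {
    "notification of affected customers": 5_000,
    "credit monitoring for victims": 12_000,
    "legal fees": 8_000,
    "lost business / reputation": 20_000,
}

total_exposure = sum(cost_worksheet.values())
print(f"Estimated exposure for this information type: ${total_exposure:,}")
```

Repeating this for each data type, highest priority first, makes the otherwise invisible cost of inadequate protection concrete before an incident forces the issue.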
4.3 Business policies related to information security and other topics.
Every business needs written policies to identify acceptable practices and expectations for business operations.
Some policies will be related to human resources, others will relate to expected employee practices for using business resources, such as telephones, computers, printers, fax machines, and Internet access. This is not an exhaustive list and the range of potential policies is largely determined by the type of business and the degree of control and accountability desired by management. Legal and regulatory requirements may also require certain policies to be put in place and enforced.
Policies for information, computer, network, and Internet security should communicate clearly to employees the expectations that the business management has for appropriate use. These policies should identify those information and other resources which are important to management and should clearly describe how management expects those resources to be used and protected by all employees.
For example, for sensitive employee information a typical policy statement might say, “All employee personnel data shall be protected from viewing or changing by unauthorized persons.” This policy statement identifies a particular type of information and then describes the protection expected to be provided for that information.
Policies should be communicated clearly to each employee, and all employees should sign a statement agreeing that they have read the policies, that they will follow them, and that they understand the possible penalties for violating them. This will help management hold employees accountable for violations of the business’s policies. As noted, there should be penalties for disregarding business policies, and those penalties should be enforced fairly and consistently for everyone in the business who violates them.
To sum it all up: Implementing the best practices described in this publication will help your business’s cost-avoidance efforts and will be useful as a tool to market your business as one in which the safety and security of your customers’ information is of the highest importance.
Appendix A: Identifying and prioritizing your organization’s information types
1. Think about the information used in/by your organization. Make a list of all the information types used in your organization. (define “information type” in any useful way that makes sense to your business)
2. Then list and prioritize the 5 most important types of information used in your organization. Enter them into the table below.
3. Identify the system on which each information type is located.
4. Finally, create a complete table for all your business information types – in priority order.
Table 1 The 5 Highest Priority Information Types In My Organization
Use this area as your “scratch pad”
(Once you finish this exercise, fill out a full table for all your important business information)
Appendix B: Identifying the protection needed by your organization’s priority information types
1. Think about the information used in/by your organization.
2. Enter the 5 highest priority information types in your organization into the table below.
3. Enter the protection required for each information type in the columns to the right.
(C – Confidentiality; I – Integrity; A – Availability) (“Y” = needed; “N” = not needed)
4. Finally, finish a complete table for all your business information types.
(Note: this would usually be done by adding three columns to Table 1)
Table 2 The Protection Needed By The 5 Highest Priority Information Types In My Organization
Appendix C: Estimated costs from bad things happening to your important business information
1. Think about the information used in/by your organization.
2. Enter into the table below your highest priority information type.
3. Enter estimated costs for each of the categories on the left.
If it isn’t applicable, please enter NA. Total the costs in each column in the bottom cell.
4. After doing the above three steps, finish a complete table for all your information types.
Table 3 The Highest Priority Information Type In My Organization and an estimated cost associated with specified bad things happening to it.
Opinion: The unspoken truth about managing geeks
September 8, 2009 (Computerworld) I can sum up every article, book and column written by notable management experts about managing IT in two sentences: “Geeks are smart and creative, but they are also egocentric, antisocial, managerially and business-challenged, victim-prone, bullheaded and credit-whoring. To overcome these intractable behavioral deficits you must do X, Y and Z.”
X, Y and Z are variable and usually contradictory between one expert and the next, but the patronizing stereotypes remain constant. I’m not entirely sure that is helpful. So, using the familiar brush, allow me to paint a different picture of those IT pros buried somewhere in your organization.
My career has been stippled with a good bit of disaster recovery consulting, which has led me to deal with dozens of organizations on their worst day, when opinions were pretty raw. I’ve heard all of the above-mentioned stereotypes and far worse, as well as a good bit of rage. The worse shape an organization is in, the more you hear the stereotypes thrown around. But my personal experiences working within IT groups have always been quite good, working with IT pros for whom the negative stereotypes just don’t seem to apply. I tended to chalk up IT group failures to some bad luck in hiring and the delicate balance of those geek stereotypes.
Recently, though, I have come to realize that perfectly healthy groups with solid, well-adjusted IT pros can and will devolve, slowly and quietly, into the behaviors that give rise to the stereotypes, given the right set of conditions. It turns out that it is the conditions that are stereotypical, and the IT pros tend to react to those conditions in logical ways. To say it a different way, organizations actively elicit these stereotypical negative behaviors.
Understanding why IT pros appear to act the way they do makes working with, among and as one of them the easiest job in the world.
It’s all about respect
Few people notice this, but for IT groups respect is the currency of the realm. IT pros do not squander this currency. Those whom they do not believe are worthy of their respect might instead be treated to professional courtesy, a friendly demeanor or the acceptance of authority. Gaining respect is not a matter of being the boss and has nothing to do with being likeable or sociable; whether you talk, eat or smell right; or any measure that isn’t directly related to the work. The amount of respect an IT pro pays someone is a measure of how tolerable that person is when it comes to getting things done, including the elegance and practicality of his solutions and suggestions. IT pros, always and without fail, quietly self-organize around those who make the work easier, while shunning those who make the work harder, independent of the organizational chart.
This self-ordering behavior occurs naturally in the IT world because it is populated by people skilled in creative analysis and ordered reasoning. Doctors are a close parallel. The stakes may be higher in medicine, but the work in both fields requires a technical expertise that can’t be faked and a proficiency that can only be measured by qualified peers. I think every good IT pro on the planet idolizes Dr. House (minus the addictions).
While everyone would like to work for a nice person who is always right, IT pros will prefer a jerk who is always right over a nice person who is always wrong. Wrong creates unnecessary work, impossible situations and major failures. Wrong is evil, and it must be defeated. Capacity for technical reasoning trumps all other professional factors, period.
Foundational (bottom-up) respect is not only the largest single determining factor in the success of an IT team, but the most ignored. I believe you can predict success or failure of an IT group simply by assessing the amount of mutual respect within it.
The elements of the stereotypes
Ego — Similar to what good doctors do, IT pros figure out that the proper projection of ego engenders trust and reduces apprehension. Because IT pros’ education does not emphasize how to deal with people, there are always rough edges. Ego, as it plays out in IT, is an essential confidence combined with a not-so-subtle cynicism. It’s not about being right for the sake of being right but being right for the sake of saving a lot of time, effort, money and credibility. IT is a team sport, so being right or wrong impacts other members of the group in non-trivial ways. Unlike in many industries, in IT, colleagues can significantly influence the careers of the entire team. Correctness yields respect, respect builds good teams, and good teams build trust and maintain credibility through a healthy projection of ego. Strong IT groups view correctness as a virtue, and certitude as a delivery method. Meek IT groups, beaten down by inconsistent policies and a lack of structural support, are simply ineffective at driving change and creating efficiencies, getting mowed over by the clients, the management or both at every turn.
The victim mentality — IT pros are sensitive to logic — that’s what you pay them for. When things don’t add up, they are prone to express their opinions on the matter, and the level of response will be proportional to the absurdity of the event. The more things that occur that make no sense, the more cynical IT pros will become. Standard organizational politics often run afoul of this, so IT pros can come to be seen as whiny or as having a victim mentality. Presuming this is a trait that must be disciplined out of them is a huge management mistake. IT pros complain primarily about logic, and primarily to people they respect. If you are dismissive of complaints, fail to recognize an illogical event or behave in deceptive ways, IT pros will likely stop complaining to you. You might mistake this as a behavioral improvement, when it’s actually a show of disrespect. It means you are no longer worth talking to, which leads to insubordination.
Insubordination — This is a tricky one. Good IT pros are not anti-bureaucracy, as many observers think. They are anti-stupidity. The difference is both subjective and subtle. Good IT pros, whether they are expected to or not, have to operate and make decisions with little supervision. So when the rules are loose and logical and supervision is results-oriented, supportive and helpful to the process, IT pros are loyal, open, engaged and downright sociable. Arbitrary or micro-management, illogical decisions, inconsistent policies, the creation of unnecessary work and exclusionary practices will elicit a quiet, subversive, almost vicious attitude from otherwise excellent IT staff. Interestingly, IT groups don’t fall apart in this mode. From the outside, nothing looks to be wrong and the work still gets done. But internally, the IT group, or portions of it, may cut themselves off almost entirely from the intended management structure. They may work on big projects or steer the group entirely from the shadows while diverting the attention of supervisors to lesser topics. They believe they are protecting the organization, as well as their own credibility — and they are often correct.
Credit whoring — IT pros would prefer to make a good decision than to get credit for it. What will make them seek credit is the danger that a member of the group or management who is dangerous to the process might receive the credit for the work instead. That is insulting. If you’ve got a lot of credit whores in your IT group, there are bigger problems causing it.
Antisocial behavior — It’s fair to say that there is a large contingent of IT pros who are socially unskilled. However, this doesn’t mean those IT pros are antisocial. On the whole, they have plenty to say. If you want to get your IT pros more involved, you should deal with the problems laid out above and then train your other staff how to deal with IT. Users need to be reminded of a few things, including:
- IT wants to help me.
- I should keep an open mind.
- IT is not my personal tech adviser, nor is my work computer my personal computer.
- IT people have lives and other interests.
Like anyone else, IT people tend to socialize with people who respect them. They’ll stop going to the company picnic if it becomes an occasion for everyone to list all the computer problems they never bothered to mention before.
How we elicit the stereotypes
What executives often fail to recognize is that every decision made that impacts IT is a technical decision. Not just some of the decisions, and not just the details of the decision, but every decision, bar none.
With IT, you cannot separate the technical aspects from the business aspects. They are one and the same, each constrained by the other and both constrained by creativity. Creativity is the most valuable asset of an IT group, and failing to promote it can cost an organization literally millions of dollars.
Most IT pros support an organization that is not involved with IT. The primary task of any IT group is to teach people how to work. That may sound authoritarian, but it’s not. IT’s job at the most fundamental level is to build, maintain and improve frameworks within which to accomplish tasks. You may not view a Web server as a framework to accomplish tasks, but it does automate the processes of advertising, sales, informing and entertaining, all of which would otherwise be done in other ways. IT groups literally teach and reteach the world how to work. That’s the job.
When you understand the mission of IT, it isn’t hard to see why co-workers and supervisors are judged severely according to their abilities to contribute to that process. If someone has to constantly be taught Computers 101 every time a new problem presents itself, he can’t contribute in the most fundamental way. It is one thing to deal with that from a co-worker, but quite another if the people who represent IT to the organization at large aren’t cognizant of how the technology works, can’t communicate it in the manner the IT group needs it communicated, can’t maintain consistency, take credit for the work of the group members, etc. This creates a huge morale problem for the group. Executives expect expert advice from the top IT person, but they have no way of knowing when they aren’t getting it. Therein lies the problem.
IT pros know when this is happening, and they find that it is impossible to draw attention to it. Once their work is impeded by the problem, they will adopt strategies and behaviors that help circumvent the issue. That is not a sustainable state, but how long it takes to deteriorate can be days, months or even years.
How to fix it
So, if you want to have a really happy, healthy and valuable IT group, I recommend one thing: Take an interest. IT pros work their butts off for people they respect, so you need to give them every reason to afford you some.
You can start with the hiring process. When hiring an IT pro, imagine you’re recruiting a doctor. And if you’re hiring a CIO, think of employing a chief of medicine. The chief of medicine should have many qualifications, but first and foremost, he should be a practicing doctor. Who decides if a doctor is a doctor? Other doctors! So, if your IT group isn’t at the table for the hiring process of their bosses and peers, this already does a disservice to the process.
Favor technical competence and leadership skills. Standard managerial processes are nearly useless in an IT group. As I mentioned, if you’ve managed to hire well in the lower ranks of your IT group, the staff already know how to manage things. Unlike in many industries, the fight in most IT groups is in how to get things done, not how to avoid work. IT pros will self-organize, disrupt and subvert in the name of accomplishing work. An over-structured, micro-managing, technically deficient runt, no matter how polished, who’s thrown into the mix for the sake of management will get a response from the professional IT group that’s similar to anyone’s response to a five-year-old tugging his pants leg.
What IT pros want in a manager is a technical sounding board and a source of general direction. Leadership and technical competence are qualities to look for in every member of the team. If you need someone to keep track of where projects are, file paperwork, produce reports and do customer relations, hire some assistants for a lot less money.
When it comes to performance checks, yearly reviews are worthless without a 360-degree assessment. Those things take more time than a simple top-down review, but it is time well spent. If you’ve been paying attention to what I’ve been telling you about how IT groups behave and organize, then you will see your IT group in a whole different light when you read the group’s 360s.
And make sure all your managers are practicing and learning. It is very easy to slip behind the curve in those positions, but just as with doctors, the only way to be relevant is to practice and maintain an expertise. In IT, six months to a year is all that stands between respect and irrelevance.
Finally, executives should have multiple in-points to the IT team. If the IT team is singing out of tune, it is worth investigating the reasons. But you’ll never even know if that’s the case if the only information you receive is from the CIO. Periodically, bring a few key IT brains to the boardroom to observe the problems of the organization at large, even about things outside of the IT world, if only to make use of their exquisitely refined BS detectors. A good IT pro is trained in how to accomplish work; their skills are not necessarily limited to computing. In fact, the best business decision-makers I know are IT people who aren’t even managers.
As I said at the very beginning, it’s all about respect. If you can identify and cultivate those individuals and processes that earn genuine respect from IT pros, you’ll have a great IT team. Taking an honest interest in helping your IT group help you is probably the smartest business move an organization can make. It also makes for happy, completely non-geek-like geeks.
Jeff Ello is a hybrid veteran of the IT and CG industries, currently managing IT for the Krannert School of Management at Purdue University. He can be contacted at email@example.com.
Vendors are finally releasing patches today for the TCP vulnerabilities first publicized nearly a year ago that affect a huge range of networking products, including any device running a version of Cisco’s IOS software, and a number of Microsoft server and desktop operating systems. Both Microsoft and Cisco released fixes for the vulnerabilities on Tuesday.
This is a denial-of-service issue, not a remote takeover. Basically, somebody can hang your machine whenever they want to, but they cannot control it. The underlying weakness has been known since 2005, but fully mitigating it is going to take years, as all of the old, unsupported hardware slowly gets phased out. This is a minor issue at worst.
Some of these are the BIG BOYS in the market…:)
Get your mouse clickers ready and update Windows, as this is a nasty bunch of updates. More than half of the patches allow remote system takeover, and most of the takeover updates are in…..Drum Roll Please…..ACTIVEX!
It is going to be either Astaro or Untangle, depending on what the client's needs are. IPFire feels kludgey, and IPCop isn't really designed for modern hardware and is a bit too basic for my needs. Comixwall is a typical Debian-style distro..if you want to live in the CLI, go this route..:)
According to the Centos homepage:
The CentOS Development team had a routine meeting today with Lance Davis in attendance. During the meeting a majority of issues were resolved immediately and a working agreement was reached with deadlines for remaining unresolved issues. There should be no impact to any CentOS users going forward.
The CentOS project is now in control of the CentOS.org and CentOS.info domains and owns all trademarks, materials, and artwork in the CentOS distributions.
We look forward to working with Lance to quickly complete all the agreed upon issues.
More information will follow soon.
Last Update: August 1, 2009 04:34 UTC by Donavan Nelson
This is nice, but it does not answer the questions they raised in public. It also does not tell me they are going to be more transparent, which could lead to another similar issue in the future. CentOS, what has actually been done to resolve these issues?
Right now I still cannot put my full trust in the long term stability or viability of this project.
No clients of mine who have run Firefox instead of IE and followed my best-practices advice have gotten infected with malware. If you don't get infected, online banking is perfectly safe from your end.
For the clients running CentOS I am going to be researching other alternatives. Right now no servers are in danger of being unable to update. I will keep everyone informed as this situation unfolds.
Read this site:
It turns out the CentOS project is under the control of one person, and that person decided to disappear..for over a year. The money that got donated did not go to CentOS but to the founding individual. This type of thing can happen anywhere, but it is exactly the kind of thing that gives anti-open-source folks tons of ammunition. They may have to rename the project or merge with another one. I will be watching developments as they unfold. I personally am now researching other distros to migrate to, since I can no longer be assured of the stability or longevity of CentOS.
I have been following the mailing list as well; you can find the mailing-list entries here.
The following was posted on the sidebar of the CentOS homepage:
I have relocated this site to a new server. It is actually a virtual machine on a physical server. I have noticed a 50% increase in performance of this site. I hope you enjoy the new speeds.
I have always thought Ubuntu's way of locking out direct root access was nonsensical. It now turns out it's worse than that..it's Microsoft-ish.
I think it’s beyond open Solaris I think it’s also Solaris as well as Mysql and Virtualbox.
They also hit an outfit called SSANZ.
Full disclosure is not meant to make money. If full disclosure did not exist, many security issues would never be known by anyone except the bad guys. Companies like Microsoft would never have had to refocus on the lack of security in their products and would never have made the improvements they have.
Some vendors are only moved by full disclosure…some will move if you contact them first. Others, like Microsoft, would only move if you disclosed publicly first. I think full disclosure is a good thing and has enhanced the overall security of the entire software industry. Just because some “security” vendors have misused full disclosure for profit does not mean full disclosure is a bad thing. I think anti-sec is on the wrong side of the wall here.
It's very hard to keep up with things moving as fast as they are. Sometimes I can relay them verbally faster than I can type them. Watch this space for some updates.
It's about time. I have been trying to kill U3 forever, among other autorun-enabled garbage. All ECC-managed networks will get this patch pushed out to them.
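For machines that can't take the patch right away, the same effect can be approximated by policy. This is a sketch of the long-documented NoDriveTypeAutoRun registry value, not a substitute for the patch; 0xFF disables autorun on every drive type, and you should test it on a non-production box first:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
; 0xFF = disable autorun on all drive types (removable, CD-ROM, network, etc.)
"NoDriveTypeAutoRun"=dword:000000ff
```

Pushing this out via Group Policy hits the same value, which is why domain-managed networks can kill U3-style autorun even on machines the patch hasn't reached yet.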
There are so many out there it's hard to sift through. I have narrowed things down to 4:
Depending on capabilities and budget, those are the 4 ECC is going to be choosing from for the foreseeable future. This makes it easier for my clients to know what I am offering, and it makes it easier for me since I'm not supporting 10 different server packages. I intend on narrowing it down to 2 in the near future.
Now this is interesting. Comixwall looks like what I have been looking for. I have been wanting to find a UTM that takes what I had with ipcop/zerina/cop+/copfilter and integrates it into one package…and is free. Astaro is very nice, but it's free only for home use. Untangle is nice, but its feature set is compromised by their wanting to compete with Astaro, so the free content filter and free anti-spam are hobbled unless you go commercial. Comixwall is totally free and so far looks promising..more to come.
Their arrogance is partially justified by superior design. The Unix system they use is multi-user by design…unlike Windows, which is not truly multi-user. 90% of everything a user is going to do on a Unix system is done inside userspace..so unless the user intentionally runs as root (which Mac does not make easy), a virus can really only damage your user space. Criminals will go after the easiest target..right now, even with Vista's design improvements, it's still based on 20-year-old NT code which has some serious deficiencies that make it a very easy target. The biggest one, and the one that continues to get leveraged, is IE via ActiveX. The fact that Windows has the largest installed base doesn't hurt either.

The author mentions the worm found for Macs. It doesn't compromise the entire system and is relatively harmless. The fearmongering of Sophos is quite evident in their posting about this worm. It's a low-grade threat that does contact harvesting. No big deal.

Let me give you an example. Do you know what the largest installed base across Linux and Windows is for web servers? Apache. It's open source, modular and designed with security in mind. Does Apache get compromised? Yes it does..but the damage is typically limited to Apache's userspace, because Apache doesn't run as root. IIS runs as SYSTEM in many cases, which is an even higher level of access than Administrator. Compromise Apache and it's mostly only Apache that's hosed..compromise IIS and most times you have a direct conduit to the kernel via SYSTEM..same for IE. Apple and the Unix guys have a good reason for their smugness. They don't rely on patched-up 20-year-old code that tries to masquerade as a secure multi-user operating system..they actually run one that was designed that way from the beginning.
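The Apache behavior described above follows a standard privilege-separation pattern: the daemon starts as root just long enough to grab privileged resources (like port 80), then permanently drops to an unprivileged account, so a later compromise stays in user space. A minimal Python sketch of that pattern follows. This is illustrative, not Apache's actual code; the uid/gid of 33 (Debian's www-data) is an assumption and varies by system.

```python
import os

def drop_privileges(uid=33, gid=33):
    """Irreversibly drop from root to an unprivileged user.

    uid/gid 33 is Debian's www-data; that number is an assumption
    and will differ on other systems.
    """
    if os.geteuid() != 0:
        return  # already unprivileged; nothing to drop
    os.setgroups([])   # clear supplementary groups first
    os.setgid(gid)     # switch group while we still have root
    os.setuid(uid)     # point of no return: root is gone

# A real daemon would bind its privileged port here, while still
# root, then call drop_privileges() before touching untrusted input.
drop_privileges()
```

Order matters: the group must be changed before the user, because once setuid() succeeds the process no longer has the privilege to call setgid().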
I found this interesting…enjoy.
This is a non-issue for the majority of folks, and the “solution” is only a stopgap. SSL itself..WAS NOT CRACKED..only the hashing algorithm used in digital signatures. The proposed fix of moving to SHA-1, which has itself been broken since 2005, is a stopgap at best. This is a configuration issue. All browsers inherently trust groups of entities, so certificates from those CAs are automatically trusted. If that automatic trust were removed and folks were forced to inspect certificates, the attack would be mitigated..except most folks won't check the certs. Most users are lazy, which is why this convenience was added, and this convenience is the reason the attack can succeed. Moving to an already-broken hashing method to replace another broken hashing method, in order to patch what is inherently a configuration problem rooted in laziness, isn't the fix.
This is one thing that will probably make national news, panic a ton of folks, and send the techie community stampeding to SHA-1, which has been broken since 2005. I personally do not know of another widely deployed hashing algorithm that's unbroken as of yet. SHA-1's attack still takes quite a bit of computational power, but it's well within reach of modern COTS Beowulf clusters now in operation. Since the Xbox 360 has 3 cores and the PS3 effectively has 8, the amount of hardware needed to compromise SHA-1 is much less than it was in 2005. It won't be long before we hear of a similar attack on SHA-1 either.
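To make the "swapping one hash for another" point concrete: the only thing that changes when a CA moves from MD5 to SHA-1 signatures is which digest function gets run over the certificate data before signing. A minimal Python sketch, with made-up placeholder bytes standing in for a certificate:

```python
import hashlib

# Stand-in for the signed portion of a certificate (placeholder data).
cert_data = b"placeholder certificate contents"

# The CA signs a fixed-length digest, not the certificate itself,
# so a collision in the digest function is enough to forge a signature.
md5_digest = hashlib.md5(cert_data).hexdigest()      # broken: practical collisions exist
sha1_digest = hashlib.sha1(cert_data).hexdigest()    # weakened since 2005
sha256_digest = hashlib.sha256(cert_data).hexdigest()  # no practical attack known

print(len(md5_digest), len(sha1_digest), len(sha256_digest))  # prints: 32 40 64
```

The swap from MD5 to SHA-1 is literally a one-token change in code like this, which is why it buys time but fixes nothing structural.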
What does this mean to the average person? Not much. How can this be mitigated? Have the browser manufacturers remove their trusted-CA pools, or at least make clients click through the certs. Inspecting them is not hard really..you just have to read. If the user doesn't take the time to read and inspect the cert, then it's nobody else's fault if they get nailed.
Now this is interesting. Read on for full details.
Welcome to the ECC Blog! This is where you will find various postings I have deemed important or helpful. If you find something of interest please use the contact form to alert me.
<a href="http://www.theinquirer.net/gb/inquirer/news/2008/06/25/vole-extends-support-life-xp">Microsoft extends support life of XP</a>
This is no news. The 2014 expiration has been <a href="http://support.microsoft.com/lifecycle/?p1=3223" target="_blank">part of the plan for a long time</a>. Look at the "updated on" date on that page: 2005. Unless the author has updated information, this is nothing new.
I also have a huge issue with <a href="http://www.twit.tv/148" target="_blank">Leo Laporte and the TWiTs this week</a>. About 32 minutes in, Leo and the others say that Vista has a perception problem. It's not a perception problem. Granted, many of the folks who are trashing Vista are simply regurgitating what others say, but I can tell you right now that Vista is a pig. I have said it before <a href="http://www.hescominsoon.com/?p=category/technical/windows/vista" target="_blank">elsewhere here</a>. I have tried it on several machines so far..some old, some a couple of years old, and a couple brand new..and Vista is a pig no matter what. The TWiTs say it's normal for the next generation to require more resources…true..but more than double? Maybe for Microsoft. That's insane. I can install a current version of Linux on a 5-year-old machine with less than half the resources, imitate the desktop Aero effects AND EXCEED THEM, plus run multimedia, office programs and various other programs, AND STILL outrun XP and make Vista look like the slug it is. Keep in mind that Apple with OSX was doing the same Aero-style effects 5 years before Vista, WITH THE INTEGRATED GRAPHICS OF THE TIME. Hey Wil, Vista does suck AND it is a car wreck. I'm still hitting network performance problems with Vista that SP1 supposedly fixed: on a GIGABIT network I can't even get 100 megabits..that's not even 10% of capacity. There are some positives, and Leo points them out: the better search, and UAC, which while irritating does work. If you don't have to buy Vista, don't, as Microsoft is moving Windows 7 (Vista 2.0) <a href="http://www.channelregister.co.uk/2008/06/24/xp_vista_windows_roadmap/" target="_blank">up to 2010</a>.
MS and other closed-source vendors like to say that because their source is closed, they are more secure. In other words, since others do not have access to the code, any zero-day vulns will be kept under wraps and not pose a danger to the userbase. Listen to this podcast to find out how MS figured out that this logic is completely wrong, and how dangerous this philosophy is.
Bryan J Smith delves into the MS file systems, compares them against Linux/Unix file systems, and digs out why MS needs defragmenting but Linux does not.
*UPDATE* Bryan has updated the post with more XFS details and clarifications.
*NOTE* If I forgot to trackback you and I used your post, let me know and I will correct it as soon as possible. Everyone I have linked to deserves proper credit..:)
Cisco and ISS have created quite a mess for themselves.
First, Mike Lynn showed at the BlackHat conference how to get the equivalent of root on all Cisco routers using the IPv6 modules. Cisco suddenly balked and leaned on ISS. ISS told Lynn not to disclose, so he quit and did it anyway. Now Cisco has a settlement with Lynn that requires Lynn to dump all of his research in this area. Also, ISS has gotten the FBI involved. To top things off, Cisco/ISS are now sending cease-and-desist orders to anyone who hosts the presentation photos. A huge list of links follows, and this post will be updated as long as the story develops.
Tom’s Networking: Owning IOS at Black Hat 2005
Schneier on Security(Huge Roundup): Cisco Harasses Security Researcher
Wired: Router Flaw Is a Ticking Bomb (note: includes an interview with Lynn)
BoingBoing’s original post
Search Security: Security researcher causes furor by releasing flaw in Cisco Systems IOS
Wired: Cisco Security Hole a Whopper
Wall Street Journal Online: Cisco Tries to Squelch Claim About a Flaw In Its Internet Routers
Now the coverup begins:
SecurityFocus.com: Cisco, ISS file suit against rogue researcher
ZDNET UK: Cisco tries to silence researcher
ComputerWorld.com: Furor over Cisco IOS router exploit erupts at Black Hat
Tom’s Hardware: Cisco Behaving Badly
Repercussions begin to show themselves:
News.com: Flaw researcher settles dispute with Cisco
Makezine.com: Video of Cisco/ISS ripping out pages from printed conference books…
News.com: Cisco hits back at flaw researcher
BBC News: Cisco acts to silence researcher
Metathoughts: Audio of a Press Conference at BlackHat USA 2005 over Cisco and Michael Lynn.
Wired.com: Whistle-Blower Faces FBI Probe
Attempts to silence backfire:
SecurityFocus.com: Exploit writers team up to target Cisco routers
Here is Lynn’s Attorney’s blog that has her view of things:
Bruce Schneier has more information and even more links.