
Security and Privacy in a Networked World/Operating systems


NOTE: This and a couple of the following, more technical topics make use of Wikipedia articles to provide a basic understanding of the matters (e.g. operating systems in this topic). These articles deal with technology and are not much disputed, having reached the common-knowledge stage. They also provide good links for further study in their reference sections.


Operating system


A computer without any software is only good as decoration. Software is needed for word processing, web surfing, graphics and many other things. Yet one more thing is needed - a 'middleman' between these programs and the computer, one that also controls the computer's various components. This middleman is known as the operating system (OS for short).

So the OS has two main roles:

  • mediator between the computer and the user. If we had to issue commands directly to the computer, we would have to use electrical signals and binary code. Today's operating systems have come a long way - the early ones were controlled through elaborate command systems, most modern ones sport a graphical user interface (GUI), and recent systems also support touch-activated displays.
  • controller of different devices. Clicking the Print button in our web browser sends the command to the OS, which in turn instructs the printer on what to print and how. Likewise, the OS controls scanning images and displaying them on screen via the graphics editor, sending e-mail through the network interface, playing music via the sound card and speakers, and so on (a minimal sketch of this mediation follows this list).
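
To make the 'middleman' role concrete, here is a minimal Python sketch (an illustration, not from the source material): even a low-level program never drives the disk hardware itself, it merely issues system calls that the OS carries out via its device drivers. The file name demo.txt is an arbitrary example.

    import os

    # A program never writes to the disk itself - it asks the OS to.
    # os.open()/os.write() map roughly onto the open()/write() system
    # calls; the OS then drives the actual disk via its device drivers.
    fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"The OS mediates between programs and hardware.\n")
    os.close(fd)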


For a good overview, read the Wikipedia article on OS: http://en.wikipedia.org/wiki/Operating_system

NB! Those with less technical background will likely find the "Components" section a tad too technical - grasping all the concepts there is not needed for this course. Yet, this kind of knowledge will not hurt either - so take it easy, but learn as much as your background allows.


Some remarks

  • One of the first true OSs was released by IBM in 1964 - the OS/360 (later OS/390), which ran on the System/360 mainframe family, the ancestor of today's zSeries. Interestingly enough, the family has survived to the present day - as of this writing, the most recent addition was the zEnterprise BC12 from July 2013. The entire line retains full backward compatibility, meaning that in principle, software from e.g. 1970 should run on the 2013 machine (compare this to e.g. Microsoft Office...).
  • At first, developing an OS was a major task suitable only for a close-knit group of specialists (such as a company or a research group at a university). Today, the ubiquity of the Internet has made it possible to create new OSs in a variety of ways: there are systems run by companies (e.g. Microsoft Windows), purely by a community (e.g. Debian GNU/Linux), even by single persons (e.g. Slackware Linux), or anything in between.
  • In the early computers of the 50s and 60s, the necessary software (including the OS) came along with the computer - hardware was incompatible and most programs only ran on specific machines. With the advent of the PC-compatible class of computers (which means 'the' personal computer for many ordinary users), mass production of software took over. This also created a market for software - what was earlier a complementary tool (like a spare tire coming with a new car) became something that could be turned into a product. Since then, all proprietary systems - even those coming pre-installed on a new computer - have been paid for per copy. Yet many people still assume that "Windows came with the computer" - this is not the case, even if the price is hidden.

Security in operating systems


Goals and threats


Tanenbaum defines the overall goals of OS security as

  • data confidentiality - meaning that only authorized people should get access to it
  • data integrity - meaning that the data is valid and nobody has (accidentally or deliberately) tampered with it
  • system availability - meaning that the OS must be robust enough to handle overloads and various conflicts
  • exclusion of outsiders - meaning that the control of the computer must remain in the hands of the original owner (today, many computers fail this point by being controlled by remote attackers).

Another important point is authenticity - the data coming from a specific, trusted source. It can be counted under integrity (as Tanenbaum does), but is better brought out separately.
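
In practice, integrity checks usually boil down to cryptographic hashes. As a simple illustration (not from the source material), here is a Python sketch that verifies a downloaded file against a checksum published by a trusted source; the file name and the digest value are placeholders.

    import hashlib

    def sha256_of(path):
        # Hash the file in chunks so large files do not exhaust memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "<digest published by the trusted source>"  # placeholder
    if sha256_of("download.iso") == expected:
        print("integrity check passed")
    else:
        print("file corrupted or tampered with")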

The threats to these goals are:

  • interception - breach of confidentiality; data is accessed by unauthorized people.
  • modification - breach of integrity; data is changed in the process. It can be disrupted/destroyed (e.g. a Windows EXE file will not work anymore) or falsified (data remains available, but "tells an altered story").
  • interruption - breach of availability (mostly in systems). Data becomes inaccessible due to malfunction or overload of the carrier (computer or communication systems). A good example is a DDoS (distributed denial of service) attack from a botnet.
  • fabrication - breach of authenticity. Basically, the data loses its guarantee of origin - it can either be completely made up or changed in order to mislead its users (though some of it may still be valid).
  • illegal exploitation - breach of exclusion of outsiders. This is an increasingly common goal of malicious software - attackers gain access to large numbers of computers which typically keep functioning but are also used for a wide range of shady activities (spreading malware, spam or scam schemes, carrying out DDoS attacks etc).

In the context of communication, we may distinguish two categories:

  • passive attacks - mostly interception of messages (eavesdropping). This can be direct interception, but also (when security measures are in place) indirect, by capturing and analyzing network traffic in order to deduce the information. Passive attacks are hard to detect, so most of the energy should go into prevention instead.
  • active attacks - modification or fabrication of traffic. These attacks are hard to prevent, so the focus should be on detection (and also damage control); a common detection technique is sketched below.
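
One standard way to detect active tampering with messages is a message authentication code (MAC). The following Python sketch is an illustration, assuming the sender and receiver already share a secret key (how they obtain it is a separate problem); it uses HMAC from the standard library.

    import hashlib
    import hmac

    key = b"shared-secret-key"   # hypothetical pre-shared key
    message = b"transfer 100 EUR to account A"

    # The sender attaches this tag to the message.
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    # An active attacker modifies the message in transit...
    received = b"transfer 900 EUR to account B"

    # ...but cannot forge a matching tag without the key, so the
    # receiver's recomputation exposes the tampering.
    ok = hmac.compare_digest(
        hmac.new(key, received, hashlib.sha256).hexdigest(), tag)
    print("message authentic" if ok else "tampering detected")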



PIBKAC again


...but here, some of the "problems" know what they are doing. Threats from the human side can include

  • "monkey errors" by clueless users - they literally happen to "press the wrong key" (akin to a monkey getting behind a keyboard) and somehow hit a bug or vulnerability in the OS (users of early versions of Windows could get the "blue screen" in a lot of creative ways)
  • "oops errors" by curious casual users - happens when a user ends up where he/she is not supposed to be - e.g. in the home directory of an administrator. Typically, this kind of error implies some degree of laziness and/or incompetence from the system management.
  • bored insiders - from time to time, every large system experiences an attack from the inside - while these can be relatively harmless (e.g. a student "testing his limits" on a lab server), they may be part of a bigger threat (computers on the 'inside' can be hijacked, users can be bribed etc).
  • moneymakers - these attackers are typically determined and well-motivated. Methods may include social engineering, eavesdropping, physical theft and others. Extortion is a rising trend - the Cryptolocker trojan is a good example.
  • three letter organizations / cyberwar - most ordinary systems are powerless here. In fact, in some jurisdictions they may even face sanctions for being too well protected.

Software threats


All software has errors. OSs are the "capital ships" of the software world and thus the main targets of a variety of attacks. Malicious software (malware) already has a considerable history - the first widespread computer viruses appeared in the 80s. At first they were just pranks or technical experiments, then became a form of cyber-hooliganism, and later a profitable way for various shady groups to make money. Likewise, the first viruses were harmless or funny, then they turned into mayhem devices capable of destroying files and data; later the focus shifted to parasite-like exploitation of target systems.

Malware includes

  • viruses - software that is capable of replicating itself by attaching copies of itself either to files or parts of data carriers (e.g. USB sticks)
  • worms - software that is capable of independent spreading by using known vulnerabilities ("security holes") in software and networked systems.
  • trojan horses - software that pretends to be something useful and tricks the user into launching something unpleasant instead.
  • logic bombs - software with a "payload" whose launch is triggered by some condition (e.g. Friday the 13th, reaching some amount of data on disk etc).
  • rootkits - software used to cover the tracks of an intruder. For example, a rootkit can lie to the user about available disk space, as a sudden fill-up of a disk could alarm the user.

Read more: https://en.wikipedia.org/wiki/Malware
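
A common countermeasure against file-modifying malware (and one way to catch the changes trojans and rootkits try to hide) is integrity checking: record cryptographic hashes of important files while the system is known to be clean, then periodically compare. The following Python sketch shows the principle behind tools like Tripwire; the directory and baseline file names are arbitrary examples.

    import hashlib
    import json
    import os

    def scan(root):
        # Map every file under root to its SHA-256 digest.
        digests = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(65536), b""):
                        h.update(chunk)
                digests[path] = h.hexdigest()
        return digests

    # First run (on a known-clean system): store the trusted baseline.
    #   json.dump(scan("/usr/bin"), open("baseline.json", "w"))

    # Later runs: flag anything added, removed or changed.
    baseline = json.load(open("baseline.json"))
    current = scan("/usr/bin")
    for path in sorted(current.keys() | baseline.keys()):
        if baseline.get(path) != current.get(path):
            print("changed/new/missing:", path)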

Some remarks


In the very old days, computers were elite devices, accessible to and run by only a small number of high-level specialists. Data security as such was a non-issue, as several factors contributed to it:

  • input and output devices were primitive - early computers typically had data entered via manual switches and displayed the results via indicator lights (later, printed output appeared). Thus specialized knowledge was needed to even understand the controls.
  • the workflow was distributed between computer engineers (the forefathers of today's sysadmins), operators, programmers etc. Mostly, only a few key people had access to the whole process.
  • software was incompatible - for years, all serious specialists wrote their own software tools. The complexity and personal nature of the work made "drive by" use by unauthorized people difficult.
  • hardware was incompatible - early computers were not connected to each other and were tailor-made specimens. Moving data was difficult, more so in secrecy.

In these settings, traditional methods of security - above all, limiting physical access with doors, locks and wardens - were enough. However, when computers came to be shared by several users, some kind of internal organization became necessary - this led to the development of mechanisms like file access rights and user groups with different privilege sets to prevent both accidental and deliberate tampering with user data by other users (a small example of such access rights follows below). Again, as computers were scarce, isolated and used by only a small number of people, these measures usually worked - and if they failed, the consequences were sorted out within the small community.
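
As a small taste of such mechanisms, the classic Unix model assigns each file read/write/execute bits for its owner, its group and everyone else. A minimal Python sketch (the file name is a made-up example):

    import os
    import stat

    path = "report.txt"        # hypothetical file
    open(path, "w").close()    # create it so the example is runnable

    # Owner may read and write, the group may read, others get nothing.
    os.chmod(path, 0o640)

    # Show the resulting permission string, e.g. '-rw-r-----'.
    mode = os.stat(path).st_mode
    print(stat.filemode(mode))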

As computers became connected into networks, more measures were needed. Starting mostly with the appearance of Unix systems in the early 70s, computers (and OSs) became compatible with each other to varying degrees - this was beneficial in many ways, but also raised several security concerns:

  • unauthorized data leaks over the networks became a reality
  • compatibility also helped seriously buggy or malicious software to spread. A good example of such a half-accidental malware case is the infamous Morris worm from 1988.

Interestingly, the appearance (and later dominance) of personal computers (PCs) initially diminished the risk of data spreading in an unauthorized manner, as for a time most PCs were standalone, not connected to any network. On the other hand, the OS of the time - MS-DOS by Microsoft - was designed with exactly this situation ("one computer, one user", no networks) in mind. Later, when PCs started to connect to networks (and especially the Internet), the security shortcomings of the system became apparent.

MS Windows started out as a graphical shell (user interface) on top of DOS rather than a separate OS (Windows 3.0, 3.1, 3.11) and then turned into a heterogeneous union of the graphical shell and the underlying DOS (Windows 95, 98, ME) - while the latter appeared to be complete operating systems, they in fact still kept the two layers separate. Only the NT series (NT 3.1, 4.0, Windows 2000 and subsequent) was designed as a full OS with at least some consideration for the multiuser, networked environment (see Tanenbaum's book for more details - it is strongly recommended reading for those with an IT background, being one of the top sources on the internals of OSs).


Study and blog

  • Task A (less technical - suitable for students with a non-technical background): compile a personal checklist for OS maintenance on your main computer (this could also be a "my next computer" list, starting with the purchase of the computer and post-purchase activities like installing an antivirus etc). List daily/weekly/monthly etc activities (both one-time and recurring) and assess their importance/urgency.
  • Task B (more technical - for those finding Task A too simple or boring...): try out Kali Linux (http://www.kali.org). Test at least three of the security tools that come preinstalled on Kali Linux against a safe and legitimate target (e.g. a dedicated test computer or your old laptop), focusing especially on the operating system. Blog your experiences - try to be as specific as possible without disclosing sensitive information. The sketch below illustrates the kind of probing such tools automate.
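
For context on what such tools do under the hood: at their simplest, network scanners just try to open connections and note which ports answer. Here is a minimal Python sketch of a TCP connect scan, to be run only against your own machine or a target you have explicit permission to test (the port list is an arbitrary sample):

    import socket

    target = "127.0.0.1"   # scan only hosts you are allowed to test
    ports = [21, 22, 23, 25, 80, 110, 143, 443, 445, 3389]

    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            # connect_ex() returns 0 when the connection succeeds,
            # i.e. when something is listening on the port.
            if s.connect_ex((target, port)) == 0:
                print("port", port, "open")
        finally:
            s.close()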


Back to the main course page