Computer Fluency Assessment

How do we assess an individual’s fluency with computers? We’re gonna go a little deeper than the ability to use Microsoft Word…

There’s a point, as knowledge and skill in computer science accumulate, when programming languages mostly start to look and feel similar. “Learning a new language” becomes less like switching from English to Russian and more like learning the peculiarities of a dialect of a language you already speak. When laypeople ask if you know a certain programming language, one you don’t, it occurs to you that answering in the negative would not be entirely correct. You could probably code passably in that language with only an afternoon on Stack Overflow.

This point is computer fluency. How can we assess individuals to determine their computer fluency?

The complexity of computer systems is enabled by their construction in layers upon layers – user-interactive software executing in a user-mode interpreter, executing on an operating system, running on a hardware architecture implemented by processor microcode, all of which is built on transistors turning on and off. Then there are network stacks, user interface stacks, operating system API stacks, hardware interface stacks… Together these layers make up the overall computer system. Fluency in each layer and field is a precursor to “computer fluency”.

Too many variants of each layer and field exist to be enumerated, though. It’s unreasonable, and unnecessary, for one individual to be fluent in, or even aware of, all of them. Well before an individual becomes acquainted with the breadth of technologies at any layer, they can form a deep understanding of that layer. That’s because most of these technologies implement the same functionality.

At what point is it possible to prove that an individual has a deep understanding of a layer? When they are capable of constructing, analyzing, and evaluating that layer, they demonstrate mastery over it.

This is demonstration of the higher levels of cognitive activity in Bloom’s Taxonomy. Computer science degrees, and the professional requirements for computer scientists/engineers, focus on skills applied at all layers of a computer system. Operating system courses are a grueling and excellent example of this, as are microprocessor courses. These courses typically assess via laboratory assignments, where students are graded on their ability to create things that meet a set of requirements. Often the requirements must be gleaned by analysis of adjacent, existing system layers. Over the span of OS and microprocessor classes, students typically demonstrate the ability to create multiple layers of a computer system. This contrasts with networking and individual programming language courses, where it’s rare for a single class to cover even the breadth of one layer. Those courses are typically geared more toward providing an introduction to a topic, so students can build upon it in further courses, and they’re typically encountered earlier on.

To assess an individual’s computer fluency, we must require them to create and analyze a diverse set of computer system layers. Further, individuals must create and analyze systems that are not already familiar to them.

Some of the layers individuals must be assessed in, then:

  • Analog circuit design and analysis
  • Logic gate-level design and analysis
  • Electronic communication design and analysis
  • Computer architecture design and analysis
  • Microprocessor-level programming and analysis
  • Operating system-level programming and analysis
  • Network design and analysis at the internetwork and transport layers
  • Network design and analysis at the application layer
  • Functional language programming and analysis
  • Object-oriented language programming and analysis
  • High-level programming language programming and analysis
  • Distributed, diverse language and architecture, system design and analysis

That’s a very large set of knowledge required for full computer fluency. It’s daunting.

I assert that at least basic fluency with each layer is necessary and sufficient for understanding of all deterministic functioning of a computer.

Assessing ability at each of these layers requires a diverse set of exercises which involve constructing and analyzing the layers. Also critical is that assessment be performed using languages individuals are not already familiar with. This is not an assessment of ability in any one programming language, for instance… Computer fluency is language agnostic, by definition, and therefore the tests must be also.

Which layers am I missing? I’m sure there are a few. Knowledge of C, or a C-like language, is part of operating system-level programming and analysis, or microprocessor-level programming and analysis. Assembly languages live at the microprocessor layer… I call those skills out because professionals often lament their absence in many modern computer science curricula. However, an assessment of ability at the OS and microprocessor layers must be programming language agnostic. Artificial intelligence concepts are missing here, as are mentions of data structures and algorithms, although the latter two would be critical for many of the layers. AI is a layer on top of the computer, and separate: the consciousness running on the underlying body.

Just some thoughts here – I would love to develop a set of tests which comprehensively demonstrate computer fluency. Tell me which other skills would have to be demonstrated to accomplish that.

Writing About Writing Secure Shell Scripts

I recently read this cautionary tale about shell scripts: https://www.linuxjournal.com/content/writing-secure-shell-scripts

It’s cautionary in two ways: it is intended to cause shell programmers caution, and I caution against you taking the article too seriously.

One of the biggest shell-related threats in recent memory was the Shellshock vulnerability. It wasn’t typically a direct threat to shell scripts, but one caused by a bug in a shell, and by other programs exposing parts of the shell to external input, often in unexpected and unlikely places.

That kind of topic is not what the Linux Journal article is about.

Instead, it provides three cautions to people writing shell scripts:

  • Know the utilities you invoke: fully specify program paths so you don’t accidentally run something out of /tmp
  • Don’t store passwords in scripts: find your own other solution, good luck, sucker!
  • Beware of invoking anything the user inputs: with examples that work on no modern Linux

These are all fine recommendations, but are made in terrible ways.

Know the Utilities You Invoke

The author provides an example where a user drops a malicious “ls” script in /tmp, which scripts then execute because they do not completely specify the path of ls. The author’s recommendation is to completely specify all paths. The problems with this are twofold…

First, to execute a program in a directory that isn’t in your path, you must specify the path of that program. If it’s your current directory, that’s just a “./”, but it still must be present. If you have “.” in your path, as the article’s scenario requires, then you have made a configuration error that can easily cause unexpected behavior at best, and the precise security vulnerability the article describes at worst.

The correct remediation for this issue is to not put “.” in your path.
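
Checking for that misconfiguration is quick; here’s a minimal sh sketch (it also flags an empty PATH entry, which means the current directory too):

    #!/bin/sh
    # Warn if "." or an empty entry (which also means "current directory")
    # appears anywhere in PATH
    case ":$PATH:" in
      *:.:*|*::*) echo "WARNING: current directory is in your PATH" >&2 ;;
      *)          echo "PATH looks clean" ;;
    esac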

The second problem with the article’s recommendation is that many programs are not in a set location on the system, and some even move around over time (decades). Many claim that the “/bin”, “/usr/bin”, “/usr/local/bin” system is set, sensible, and regular. This is not the case in practice. Usage varies between distros, and even between versions of distros. In the worst case, a system update can move the location of a program. This isn’t a problem if the new location is in your path; you’ll still execute the right thing. Unless you fully specified the old location in all your scripts.

This isn’t to say that you can’t usually specify the full location. And specifying the full path might provide a greater security guarantee. However, the author’s recommendation has some practical downsides, and anyway don’t put “.” in your path!

The great /usr merge, which symlinks /bin and /sbin into /usr, makes fully specifying paths more practical.
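
A practical middle ground, sketched below (my suggestion, not the article’s): pin a known-good PATH once at the top of the script, then invoke utilities by bare name. You set the search space in one line instead of hard-coding a location into every invocation.

    #!/bin/sh
    # Pin a known-good search path once; every later bare "ls", "grep", etc.
    # resolves only within these directories
    PATH=/usr/bin:/bin:/usr/sbin:/sbin
    export PATH

    ls /var/log    # resolves to /usr/bin/ls (or /bin/ls), never /tmp/ls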

Don’t Store Passwords in Scripts

This is 100% correct. Anyone with read access to your scripts, your git repo, or your backups can pull those passwords right out. The problem with this section of the article is that it doesn’t provide any practical solutions. Anybody who has been shell scripting for any length of time will have run into the password problem, and lots of us have just crammed the password into the script, or into a readable file, at some point.

There are other better solutions!

First though, another wrong way to do it: specifying passwords on the command line or in environment variables. Don’t do this either! It often seems a better solution than dropping the password in the code, but both approaches still expose passwords in many ways.
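
Here’s one of those ways, as a sketch you can try on most Linux systems (myscript.sh is hypothetical; any long-running command will do):

    # Run something with a password on its command line
    ./myscript.sh --password=hunter2 &

    # Meanwhile, any other user on the box can read the full command line:
    ps -ef | grep myscript

    # Environment variables leak similarly: /proc/<pid>/environ is readable
    # by the process owner and root, and child processes inherit them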

If your need is for SSH (or related) authentication, consider requiring the script user to type in the password. The password is a way to ensure the user actually has permission to use the resource you’re automating access to.

Often, though, scripts need to run without user interaction, so prompting for a password isn’t feasible in those cases.

Then you need to limit the damage someone malicious can do if they gain access to the target system. Consider setting up a specific user on the target system whose permissions are limited to only what the automated actions require. Set up key-based access to that target account, don’t specify a key file password, and don’t give the target account an actual system password. Place the key file in a location accessible only to the users who should be able to execute the script (maybe you need a custom user or group here, too) and give the file only the minimum required permissions. Now, in your script, invoke ssh, scp, etc. with that key file specified.
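
A minimal sketch of that setup (key path, account, and host names are all hypothetical):

    # One-time setup: a dedicated key with no passphrase, for this task only
    ssh-keygen -t ed25519 -N "" -f /opt/backup/backup_key
    chmod 600 /opt/backup/backup_key   # and lock down the directory, too

    # In the script: BatchMode makes ssh/scp fail instead of prompting
    scp -i /opt/backup/backup_key -o BatchMode=yes \
        /var/backups/nightly.tar.gz backup@target.example.com:/srv/backups/

On the target side you can tighten this further by prefixing the key’s authorized_keys entry with a command="..." restriction, which limits that key to a single command.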

Using ssh-agent is probably an even better solution: https://www.akadia.com/services/ssh_agent.html
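
With ssh-agent, the key keeps its passphrase; you type it once per session and the agent answers for every connection after that. Something like this (host and key names hypothetical):

    # Start an agent for this session and load a passphrase-protected key;
    # ssh-add prompts for the passphrase once
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_ed25519

    # Later connections authenticate through the agent, with no prompt
    ssh backup@target.example.com /usr/local/bin/run-backup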

What if it’s not SSH you’re sending a password to, but sudo? I think we’ve all wanted to do this at some point; I confess that I have.

Don’t do it though!

Change your sudo configuration so that the specific command you need a specific user to run doesn’t require a password! Then lock down access to that user. Even in this case, you’re probably creating problems you don’t expect. Imagine if the target executable has a vulnerability… Now you’ve opened the aperture for privilege escalation. A better solution is to change permissions to allow specific users access to the specific resources required, then reduce others’ access to those user accounts.
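
A sketch of that sudo configuration, with hypothetical user and command names:

    # /etc/sudoers.d/rotate-logs  (edit with: visudo -f /etc/sudoers.d/rotate-logs)
    # Let one service account run exactly one command as root, with no password
    logbot ALL=(root) NOPASSWD: /usr/local/bin/rotate-logs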

What if you need to script password sending to telnet or ftp? I don’t have great solutions at hand for you. At some point you may need to put usernames and passwords in files, limit access to those files, and have your script grab the file contents.
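
If you end up there, a minimal sketch (paths hypothetical):

    # One-time setup: a credentials file only the script's user can read
    chmod 600 /opt/legacyjob/ftp_pass

    # In the script: pull the secret from the file at runtime...
    FTP_PASS=$(cat /opt/legacyjob/ftp_pass)

    # ...and hand it to the client on stdin or via a config file, never on
    # the command line, where ps(1) would expose it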

Beware of Invoking Anything the User Inputs

The author suggests that a specific setup and command line can cause “quite dire consequences.” Try the command for yourself, though, after changing the “rm” command to something benign like a version of “ls”.

You didn’t get the output the author suggested you might? That’s because this isn’t really a problem the way he states it.

Oh, the author’s “eval” version might work, but when have you ever written code like that? Please say you haven’t.

It’s definitely possible to accidentally provide ways for users to cause command injection – it’s a very common class of vulnerabilities. Scrubbing the input, as the article author suggests, is a common solution.

One recommendation the author should have made clearer: when scrubbing input, permit only data that fits a minimalist whitelist. Don’t try to look for bad characters, as you would with a blacklist… Instead, fail on any characters that aren’t in your specific small set.
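
In portable shell, that can look like this minimal sketch (the allowed character set is an example; shrink it to match your actual input):

    #!/bin/sh
    # Whitelist validation: fail on anything outside a small known-good set
    case $1 in
      *[!A-Za-z0-9_.-]*|"")
        printf 'invalid input: %s\n' "$1" >&2
        exit 1
        ;;
    esac
    # Past this point $1 contains only letters, digits, underscore, dot, hyphen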

More often, though, the use case for shell scripts is accepting input from users already on your system. In this case, the recommendations in the article are lacking.

My main recommendation is to avoid permitting users to run anything as a higher-level user, or as a different user at all. If users can only execute scripts as themselves, and the script doesn’t grab extra permissions (via SSH, for example), any commands the user might trick the script into running would also run with the user’s permissions. Therefore, it’s nothing they couldn’t have done themselves.

If there are shared resources the script needs that the user shouldn’t normally have, perhaps use sudo within the script to grab/use those resources then drop permissions (as would happen if a separate program did the permissioned work, then ended). Alternatively, create another user with only minimal permissions, including those shared resources, then have the script run as that user.
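
A sketch of the first approach (names hypothetical; it assumes a NOPASSWD sudoers rule, like the one earlier, scoped to this single command):

    #!/bin/sh
    # Only the one privileged step is elevated; everything else runs as the
    # invoking user
    sudo -u sharedsvc /usr/local/bin/fetch-shared-data /var/shared/report.csv

    # Back to our own permissions for the rest of the work
    awk -F, '{ print $1 }' /var/shared/report.csv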

If the script does have to get extra permissions, like through SSH, avoid sending user input there, or sanitize it with a whitelist (worst case). When I say SSH provides extra permissions, I mean that it does so by providing access to another computer, and potentially another user.

You should avoid writing code that evals user input, or that places it on the command line where it can expand into multiple arguments, and you should sanitize input when you have to take the risk… But the best solution is to design the environment so that even when you inevitably mess up, it’s not that big a deal.

Conclusion

Good points, Dave, but the recommendations are lacking.

The Apple article he links has better recommendations and solid examples.  It points out that older Bash versions were vulnerable to one of Dave’s examples that I pooh-poohed.  However, there are many problems with running older Bash versions (wondered why I mentioned Shellshock?)…  Don’t do that anymore.  The Apple example also has a great hidden eval in there – echoing user input to a file and then executing the file is essentially eval.  Avoid.

Introducing the Hitchcock

Some trucks have testicles, mine has a Hitchcock.

The Hitchcock on the truck.

Yes, that’s supposed to be the outline of Alfred Hitchcock.  It looks a lot like Harambe though…

Hitchcock in profile

If you want to make one, check out the Thingiverse thing here.  Download all files, open the zip up, and under “files” you’ll find an “stl” file.  Hop on over to https://www.3dhubs.com/, click “start manufacturing”, and upload that “stl” file.  I used regular PLA plastic and printed in white to try to avoid any discoloration due to sunlight inevitably breaking this thing down.  If you want a recommendation for a specific hub to use over there, these guys printed this Hitchcock, and I’m sure they’ll do a great job for you too.

For the next go with this, I’ll add some definition to the print, and maybe change the way the right side and bottom are laid out.  Having something that looks like Harambe is also great, but it might be a clearer joke to everyone else in traffic if it’s clearly Alfred Hitchcock staring back at them.

Confidentiality and Integrity vs Availability

In computer security, there are three main axes for consideration – confidentiality, integrity, and availability (CIA).  These are commonly thought of as things you desire out of a secure system.  You want your communications to only be available to the intended agents, you want them to remain unchanged except when you intend them to change, and you want them to be available when you need them.  Preferably, you want your communications to have all of those properties.

Organizations make trade-offs among confidentiality, integrity, and availability daily, and in the real world increasing one necessarily decreases another.

This fact is obvious to most network security practitioners – but only considered in theory.  In practice, organizations trade off CIA each time they make a decision about their information systems.

This fact is not obvious to an organization’s decision makers – management.

Most computer security decisions seek to increase confidentiality and integrity without considering the costs to availability.

This is visible primarily in the standard Information Technology world, as contrasted with the SCADA/ICS Operational Technology world, where most recognize availability as king.

To demonstrate C&I vs A, consider the way we “improve the security” of our information systems in the traditional IT world.  Improvements generally focus on C&I, and they do so by adding defense in depth or defense in breadth.

Depth

Defense in depth, first.  Add a security measure to your IT system of a type not already present – add a firewall, an IDS or IPS, SSL inspection, endpoint virus scanners, host-based firewalls, host-based anomaly detection…  There are always new ways to add defense in depth.

These methods directly add fragility to your IT system!  The fragility is like that of a child’s model bridge supported by one popsicle stick at each end, perhaps leaning against a wall.  Popsicle sticks snap, breaking the bridge, in the same way that each piece of security infrastructure has the potential to disrupt your network.

Security infrastructure components have a non-zero probability of causing a problem with part of your organization’s mission.

When you add defense in depth, you are gluing a popsicle stick to the bottom of the bridge’s current supports.  You’re making those bridge supports longer by one popsicle stick.  Now the chances of the bridge breaking have increased – the older stick can snap, or the newer, or both.  The probability of the older stick snapping remains the same, but you’ve added a new failure mode with the new stick.

In network security, a popsicle stick snap looks like a device blocking legitimate traffic (false positives), a device providing new attack surface for adversaries, a device adding one more hop so that a packet’s TTL expires before delivery, or a device delaying traffic long enough to disrupt communications at higher layers of the network model.

Adding defense in depth can provide benefit to confidentiality and integrity, but increases network fragility.  An increase in network fragility necessarily reduces network availability.  The reduction in availability may be difficult to detect, and most of the time has negligible effect.  However – networks are in place for months and years, and over that time the availability reductions become noticeable.
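
A rough back-of-the-envelope sketch of why, assuming n inline devices that fail independently and must all work for traffic to flow:

    availability_of_chain = (availability_of_device) ^ n
    0.999 ^ 10 ≈ 0.990    (ten 99.9%-available devices in series: roughly
                           ten times the downtime of a single device)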

Breadth

Defense in breadth is again like gluing an additional popsicle stick to the end of the existing bridge supports!  It is not similar to taping a second popsicle stick to the existing ones, nor is it similar to adding another connection point to the bridge for a popsicle stick.

Defense in breadth is adding a second version of an existing defensive measure with the idea that, if the first didn’t catch a bad guy, maybe the second will, and we’ll still have caught the bad guy.  Increasing defense in breadth by adding measures to the system has all the same negatives as increasing defense in depth, all of the same outcomes, and increases the fragility of the system in the same way.

Adding network redundancies can improve availability, but doing so stretches manpower, limiting the availability benefits that redundancy can deliver.

Practice

In practice, manpower is limited.  This is as true in the field of network security as it is in every other field.  The limited manpower that is monitoring your networks today is having some amount of success – you are able to use your networks with some level of CI&A.  By increasing the number of defensive measures those employees must maintain, and increasing the redundancy measures they must maintain, you are requiring more of those limited manpower resources.  This is all in addition to the growing size of other parts of your enterprise and the growing network requirements of our information age.

There is, of course, a rate at which you can increase C&I defenses, increase network redundancy, and increase manpower, and stay ahead of your decreasing network availability.  However, increases in organizational size increase complexity and decrease your return on investment.

Implementing new confidentiality or integrity measures at a constant rate requires exponential increases in availability investment.

This is unsustainable!  Not to mention the fact that every organization has monetary resources and management motivation that will give out long before availability is substantially improved.

Of course – this type of problem has been common in history.  Automation technology and process improvements (defensive/operational tactics) usually save the day, and they can here too, but they are one more thing that will add complexity and fragility to your network, thus reducing availability.  The measures become self-limiting in the same way as before.

Others have discussed the trade-offs in C&I vs A – however, the idea that C&I trade off against A is mostly found in discussions of availability’s more human needs.  For example, the idea that increasing C or I by implementing password restrictions makes it more likely that a bank manager will forget their password, and therefore will be unable to run the bank, decreasing availability.  Another paper discusses how confidentiality decreases as redundancy measures (availability) increase, due to increased attack surface.  However, I have seen very little discussion of a decrease in availability of the underlying technical network resulting from new confidentiality and integrity measures.  I have seen ample evidence of it, though, especially in the business networks I am most familiar with.

Managers of the networks I’m familiar with regularly mandate new security measures, nearly always designed to improve confidentiality or integrity.  Availability issues are always cropping up, and network complexity is rarely considered as a possible cause.  Network complexity is almost never fingered as a problem worth fixing, and increased manpower is always the solution.  Increased manpower is also rarely politically practical.

Lastly – this thought is related to the idea of Byzantine failures causing catastrophe within a complex system.  Availability issues are simply the small catastrophes leading up to the major one.

What to do, what to do…

  • We must remember that measures designed to increase confidentiality and integrity necessarily decrease availability.
  • We must be bold enough to consider eliminating confidentiality and integrity measures.

Most of all, though,

We must always balance the risks related to confidentiality and integrity vs the risks to availability.