Deploying Secure Containers for Training and Development


Admin: make something that is really easy to install, deploy, and configure.
User: design an intuitive interface and smooth training experience with minimal requirements.
Common training problems have been addressed, as well as an overview of training types. I will now offer a solution that tackles a few of the aforementioned problems. Something that is easy to install, deploy, and configure for the administrator, and easy to use for the student, is needed.
Because developing training material and setting up events is an enormous amount of work, taking days, weeks, or more, it would be ideal to deploy training environments quickly and effectively and leave most of one's work focused on preparation and content. From the perspective of the student, another goal is to design an intuitive interface that provides a smooth training experience with minimal requirements.
Isolated, Scalable, and Lightweight Environment for Training
A container system for teaching Linux-based software with minimal participation effort: the participation barrier is set very low, and students only need an SSH client.
ISLET (Isolated, Scalable, and Lightweight Environment for Training) is a tool that I wrote that uses Docker to quickly provide various training environments for training events. It streamlines the process and is FOSS (Free and Open Source Software) available on GitHub.
How does ISLET address these criticisms? To begin, the participation barrier is set very low: all one needs is a remote access tool such as an SSH client. OpenSSH is a cross-platform tool that is available on desktops, servers, smartphones, and tablets, and is supported by many operating systems. The student is therefore not banished to a workstation that a hypervisor depends upon, and as a result the choice of hardware is that much greater. Next, recall that shared-system training had an issue managing user accounts: having to distribute usernames and passwords to students can be a pain for the trainers. ISLET allows users to create and manage their own accounts. Only one account is shared, and that is the one that allows users to remotely access the server running ISLET. This is something that can be displayed to all students, like an SSID and wireless password. Every student uses it to connect to the system and is placed in the ISLET software, where they create their own ISLET account and can immediately gain access to a training environment. From the user's point of view, it takes only a handful of seconds to end up in a training environment ready to perform work. From the perspective of the trainer, only a single account on the host needs to be created. In sum, most of the work regarding account management is eliminated. Continuing, updating training materials can easily be done by mounting a directory containing the materials from the host into the containers. Updating or correcting material in the host directory makes it immediately available to the users.
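The materials-mounting idea above can be sketched with a plain Docker invocation. The host path and image name below are hypothetical stand-ins, and because running it requires a Docker host, the command is printed rather than executed:

```shell
# Illustrative only: bind-mount a host directory of training materials
# read-only into a container, so edits made on the host are visible to
# every student container immediately. Path and image are hypothetical.
MATERIALS=/srv/training/materials
IMAGE=training/bro-sandbox
CMD="docker run -it --rm -v ${MATERIALS}:/materials:ro ${IMAGE} bash"
# Printed instead of run: executing it requires a Docker host.
echo "$CMD"
```

The `:ro` suffix keeps students from modifying the shared materials; each container sees the same live copy of the host directory.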
Moving along, we'll address the issue of waiting for virtual machines to boot and having to configure them. ISLET tells Docker to create containers out of prepackaged Docker images that consist of the software the students will be instructed on. This can be anything from GCC (the GNU C Compiler) to an IDS (Intrusion Detection System); just about anything you can install and run on a GNU/Linux system can be packaged into a Docker image. From the ISLET menu the user selects the training environment they want to enter, and the trainer can provide multiple options if they wish. There may be different environments, each with different software for different topics, instead of one large image containing all the tools.
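As a concrete sketch of "package whatever you can install into an image", the snippet below writes a minimal Dockerfile for a GCC training environment. The base image and package selection are assumptions, and the build step is only printed since it needs a Docker host:

```shell
# Write a minimal Dockerfile for a hypothetical GCC training image.
cat > Dockerfile.training <<'EOF'
FROM debian:stable-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc && \
    rm -rf /var/lib/apt/lists/*
CMD ["bash"]
EOF
# Building requires a Docker host, so the command is printed, not run:
echo "docker build -t training/gcc -f Dockerfile.training ."
```

A trainer would build one such image per topic ahead of the event; ISLET then only has to start containers from it.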
Once a training environment is selected, a user can perform work in it in under a second; it's effectively instantaneous. Being placed in a training environment can happen in less than 30 seconds from the initial connection to the ISLET system. It's harder to feel impatient with that kind of speed, and the student has more time to get focused and need not worry about whether
their virtual machine is going to boot up. In addition, ISLET is configuration-less for students: the configuration is provided by the trainer. This reduces the possibility of some students falling behind others due to technical difficulties, since everyone is using the same configuration on the server. Less stress and frustration are experienced as a result, which improves the overall training experience for the student.
The next issue to address is common to Web-based training: the software environment provided is often limited or ad hoc for specific tasks. Instead of providing a simplified interpreter for training, ISLET is intended to give users the power and flexibility of a full file system with the standard unix toolset, where the student works from a command interpreter such as Bash. Students can not only write code but also explore the directory structure and system documentation, and take advantage of the powerful tools available on the system, which can be used in conjunction with, and to enhance, the software being learned. ISLET excels at command-line based training.
Finally, a source for a feedback loop is available in the Docker Engine, which ISLET uses. A log of standard output and error, which includes the commands the students executed, is available for review. Trainers can spot mistakes made by students and build on that to improve their own training by incorporating that information back into the curriculum. For example, if a number of students make the same mistake in their code or while executing a command, it might be avoided next time by a better explanation or improved instruction. The instructor may not have explained something as well as they could have, and that's why users were making mistakes.
Real World Use Cases:
• Launched the precursor at BroCon14: used to teach Bro scripting. 50 users had shell access to Bro and unix tools in containers simultaneously on a lower-end (m3.xlarge) EC2 VM, no problem.
• University of Illinois Digital Forensics 2 course.
• FlowCon Bro training: 100 users.
• Used to teach Linux tools at UIUC LUG meetings.
The precursor to ISLET, named BroLive! (also known as Bro sandbox), was launched at BroCon14. It was a beta release intended to demonstrate a better way to train at conferences, developed out of a need to address some of the issues mentioned earlier that occurred at earlier Bro events. BroLive! was used to teach the Bro scripting language and to analyze Bro logs with the standard unix toolset, e.g., grep, awk, and sort. We had roughly 50 users simultaneously working with Bro in containers launched by the tool. Each container had Bro installed along with various command-line tools used to analyze Bro's output. The machine was hosted on a lower-end EC2 virtual machine on Amazon's AWS infrastructure. It went pretty well for its first run and addressed the problems it was meant to solve.
Colleagues began to see value in the tool and became more cognizant of the state of IT software training. I set out to make it more software neutral, rewrote much of it in my free time, and used it to train others. It was later renamed ISLET, a name coined by Adam Slagell of the National Center for Supercomputing Applications.
It has since been used in various settings. Notably, at the University of Illinois at Urbana-Champaign, the Digital Forensics 2 course used ISLET to train students on computer forensics tools such as Volatility, Sleuthkit, and Autopsy. ISLET has also been used for Bro training at events like FlowCon and DerbyCon, where it is said to have handled training for more than a hundred users at once. I've used it to teach various pieces of software at the GNU/Linux User Group and OpenNSM at UIUC.
Feedback loop
Container logs show users' actions, e.g., mistakes, which can be used to improve future training.
I conducted research and worked on a paper that evaluated ISLET against a number of metrics to see how well it would perform in the real world. Jeanette Dopheide, Adam Slagell, and I, all of the Cybersecurity Directorate at the National Center for Supercomputing Applications, worked on the paper, which was eventually published at the XSEDE (eXtreme Science and Engineering Discovery Environment) conference in 2015. We used the Bro network security monitor as the tool for training. We ran Bro against a network trace file that's commonly used at Bro training events, which produced many different
protocol logs. Note that the Bro process performs a lot of work: it's both CPU intensive and requires a lot of memory to keep state for protocols such as TCP. It attaches various analyzers to the connections to decode the network traffic up to the application layer and produces readable logs of the results.
Container Startup
[Figure: Docker container startup time in seconds vs. number of independent trials, each running the uptime command.]
We performed a number of experiments in our evaluation of Docker Engine and the ISLET training software. First, we were curious about container startup time. For example, the CoreOS website stated that containers start up in milliseconds, and we did not see that claim substantiated, so we tested it ourselves. We ran a number of independent trials: 100 containers, each running the uptime command and exiting. What we found was that it took approximately 400 milliseconds to create the container, execute the uptime program, and then exit the container. That's how fast the start-up time was for the containers in our system. Container creation time is very fast, and most of the time is actually spent running the application.
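The shape of that startup measurement can be reproduced with a small loop. The harness below is a sketch, not the paper's actual test rig; it times bare runs of uptime (wrapping the command as `docker run --rm <image> uptime` on a Docker host would include container startup in the measurement):

```shell
# Time N short-lived runs of a command and report the average cost in
# milliseconds. Uses GNU date's nanosecond format (%N), so Linux only.
trials=20
cmd=uptime
command -v uptime >/dev/null 2>&1 || cmd=true   # fallback if uptime is absent
total_ns=0
i=0
while [ "$i" -lt "$trials" ]; do
  start=$(date +%s%N)
  "$cmd" > /dev/null
  end=$(date +%s%N)
  total_ns=$(( total_ns + end - start ))
  i=$(( i + 1 ))
done
avg_ms=$(( total_ns / trials / 1000000 ))
echo "average time to spawn, run, and exit: ${avg_ms} ms over ${trials} trials"
```

Comparing the bare-process average against the containerized average isolates how much of the ~400 ms is Docker overhead versus the program itself.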
Container Concurrency
[Figure: System load (1-, 5-, and 15-minute load averages) vs. number of running containers, each running top.]
We also tested container concurrency. We talked about density earlier, i.e., how many containers can be running on a host. We were easily able to execute 1,000 containers, each running the program top, at the same time on a 16-core host without saturating the system. These containers were all running concurrently. Imagine having the overhead of a thousand virtual machines each running top; it would be much harder to scale that on a single host of the same size. Glancing at the load averages on the graph of system load vs. number of running containers, you can see that this scales very well until we run more than 700 containers at once, which causes a large spike in load averages. This can be investigated further to find the cause.
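The load figures plotted in the concurrency graph come straight from the kernel. On Linux they can be read from /proc/loadavg, which is how a monitoring loop for such an experiment might sample them (a sketch, not the paper's instrumentation):

```shell
# Read the 1-, 5-, and 15-minute load averages (Linux-specific path).
read one five fifteen rest < /proc/loadavg
echo "load averages: 1m=$one 5m=$five 15m=$fifteen"
# A crude rule of thumb: a load average persistently above the CPU core
# count suggests the host is saturated.
cores=$(nproc)
echo "cpu cores: $cores"
```

Sampling these values once per batch of launched containers produces exactly the load-vs-container-count curve shown above.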
Simulate Training
• Metric: response/execution time of the program to train on. Standard time: average of 2.13 seconds. Cutoff point: 6 seconds (too long to wait).
• We used the Bro network security monitor as the test. Processed a network trace file used for training at Bro events. Bro is a process that does a lot of work.
• Introduced small randomized delays for pauses and common commands for the environment, looped 20 times. Ends up simulating 10 minutes of high user activity.
Our next experiment was to perform a simulated training test with the goal of finding how many users can train simultaneously using Bro. Our metric is the execution time it takes for Bro to process the network trace file. The execution time of the software being trained on measures how well our system is performing during training: if the execution time increases to the point where it affects the user's ability to train, such as growing impatience, we need to allocate more resources to the computer. We ran Bro through the network trace file 100 times; the average execution time was 2.13 seconds. We decided that 6 seconds, roughly three times the average, was our cutoff point, i.e., the point at which we would need to add more resources or stop admitting users. To find the cutoff point on our system, we simulated common commands used in a Bro training session. Tasks like generating and analyzing Bro logs were performed, creating 10 minutes of simulated but active user activity. We simulated an overly active user, one who is not representative of the typical class; this stressed the software. At a training event a user performs instructions that the trainer provides, and there is a delay between listening to instructions and carrying them out, so there will be pauses between tasks such as listening to an explanation. We simulated a few seconds of pause between commands, but not pauses representative of a class with a lecture, because that's difficult to measure (e.g., the length of pauses during speech), and we did not have any data available from past conferences.
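A stripped-down version of that simulated user might look like the loop below. The command stand-ins, pause range, and loop count are assumptions (the actual evaluation looped Bro-specific tasks 20 times), and the 6-second cutoff check mirrors the rule described above:

```shell
# Hypothetical simulated-user loop: run stand-in commands with small
# randomized pauses, time each batch, and flag any batch that exceeds
# the 6-second cutoff (roughly 3x the measured 2.13s average).
cutoff_ms=6000
loops=3          # the real evaluation used 20 loops (~10 minutes)
over_cutoff=0
i=0
while [ "$i" -lt "$loops" ]; do
  pause=$(( $(od -An -N1 -tu1 /dev/urandom) % 3 ))   # 0-2 second pause
  sleep "$pause"
  start=$(date +%s%N)
  ls / > /dev/null                                    # stand-in for "generate logs"
  grep -c '' /etc/hostname > /dev/null 2>&1 || true   # stand-in for "analyze logs"
  end=$(date +%s%N)
  elapsed_ms=$(( (end - start) / 1000000 ))
  if [ "$elapsed_ms" -ge "$cutoff_ms" ]; then
    over_cutoff=$(( over_cutoff + 1 ))
  fi
  i=$(( i + 1 ))
done
echo "batches over cutoff: ${over_cutoff}/${loops}"
```

Running one such loop per simulated user, and counting how many users it takes before batches start crossing the cutoff, gives the capacity estimate the next section plots.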
Simulation Data
[Figure: Bro PCAP processing time in seconds and 1-minute system load average vs. number of simulated users, with the 6-second processing cutoff marked.]
We plotted the results in the provided graph. The dotted line parallel to the x-axis is the processing cutoff time. The solid line represents the execution time as simulated users actively work on the system. As more users train, the execution time increases because the system has to run multiple simulations concurrently. If the execution time rises above the cutoff value for a simulation, the system needs more resources, because it took 6 seconds or more to run Bro. The execution-time line ascends completely (including valleys) above the cutoff point when we reach 150 users. This tells us that we can train a few less than 150 users on our host of 16 CPU cores and 32 GB of RAM with only negligible impact.
Cost Comparison
[Figure: cost comparison.]
