Tuesday, October 10, 2017

Upgrading your R libraries after OS upgrade

I recently posted about how to do an upgrade to Fedora 26 while maintaining the Nvidia drivers. So, what are you supposed to do with the R libraries you downloaded? If you try to use audit-explorer in RStudio, you'll get an error because system library versions have changed.

There are instructions on the web about how to do this. They basically say to run:

update.packages(ask=FALSE, checkBuilt = TRUE)


This works fine in some cases, but in our case the OS was upgraded and R can't tell that anything needs to be done, because our R libraries were already up to date before the OS upgrade.

We can run a script to remove and re-install the old libraries. The script works because the R libraries on the system were upgraded when you went to F26. All we need to do is rebuild the ones kept in your home dir.

The following script should be self-explanatory.

# Ask R where it keeps its libraries
all <- .libPaths()

# Ask R where its system libraries are
site <- .Library.site

# Now subtract one from the other to get your home dir libraries
loc <- setdiff(all, site)

# Take a look at loc to ensure it only points to the R directory in your home dir
loc

# Ask R for the list of packages in your home dir
plist <- installed.packages(lib.loc = loc)
df <- as.data.frame(plist)
# Take a look at the packages to ensure this looks about right
#View(df)

# Get rid of the old ones
# Passing lib = loc ensures we only remove from the home dir library
for (p in df$Package) { remove.packages(p, lib = loc) }

# Reinstall the packages
for (p in df$Package) { install.packages(p) }

When you run this, single step one line at a time. Do not run the whole thing. Single step it to the point where it outputs 'loc'. Does it look like a directory in your home dir? I get

"/home/sgrubb/R/x86_64-redhat-linux-gnu-library/3.4"

OK. Now single step down to the View(df) call. Uncomment that if you want. It shows a real nice table of all the package info.

Now it's time for the scary part...deleting all the packages. Step over the first for loop. You will see a whole bunch of red text scroll in the Console pane. This is normal.

Now to put it all back, step over the last for loop. RStudio will ask you if you want to restart R prior to installing. Tell it yes. It will ask again. This time cancel it by clicking on the X in the corner of the dialog. The dialog will pop up again and again. Click the X to close it. At some point RStudio thinks it finished your for loop, but it didn't. You can tell because you see an empty cursor ready to use in the Console pane.

Fear not. Run the last for loop again. This time it will be unimpeded and will proceed to download and install all of your packages.

Whatever you do, do not exit RStudio until after the second run of the for loop finishes. This can take 10 or 20 minutes depending on how many libraries you have. Exiting before the building finishes will surely lose the list of packages. You can tell it's done because the Console pane is ready to use again.


Conclusion
When you upgrade the OS, sometimes R libraries won't work, and updating them doesn't help because they are already the latest versions. The solution is running a script. It is not without danger, but it does the trick.

Tuesday, October 3, 2017

Upgrading to F26 while running nvidia drivers


Originally, I planned to post a much bigger article. I run with the rpmfusion-nonfree-updates-testing.repo enabled. I would not recommend that for most people. The reason is that the released version of the nvidia drivers is 375.66. This is cuda 8. If you run with the testing repo, you will get version 384.90. This one is cuda 9. That means redoing your whole environment. So, we'll save that for another blog post. Meanwhile, let's go over how to do the upgrade.

Upgrading to F26
Upgrading to F26 from F25 was pretty smooth. I had to uninstall RStudio, but I already had the recipe to rebuild it. I followed the normal fedora upgrade instruction except I took one small deviation. If you use nvidia drivers, you probably have noticed that when you install a new kernel, akmods builds a couple new rpms and installs them on system shutdown. This way when you boot back up, the new kernel has matching drivers.

My concern was how does this work when upgrading the whole OS via dnf? What I did was:

  1. Let dnf download all the packages as normal.
  2. Reboot the system per dnf instructions so it can install them.
  3. After it had installed all the new F26 packages, I watched carefully for the grub boot menu to appear. I edited the first entry to add a 3 at the end. This causes it to boot into runlevel 3 which is non-graphical.
  4. Then I logged in, looked around to see how the upgrade went, and then rebooted to try graphical mode. Doing a test boot into text mode was just in case it needed to build the rpms for the new F26 kernel during shutdown.

Sure enough, that is what happened. It started a shutdown job and built the new kernel modules and installed them. It came back up in graphical mode just fine.

In the near future, I'll write about switching to cuda 9. If you don't have to, I wouldn't for now.

Monday, October 2, 2017

Sometimes it takes two objects

Work is progressing on an upcoming release of the audit user space software. During the work to create text representations of the event, I found that some ideas just can't be adequately captured in the normalized view. For example, if an admin mounted a disk drive, all we could say is that a disk was mounted. But in truth, the admin mounted the disk to a specific place in the file system. How do we capture that? It is important.

After a while, I decided that sometimes there are simply two objects. The admin mounted this(1) to that(2). To address this the auparse library will assign fields to object2 (which is formally called primary2) whenever it sees the following:

1) Files are renamed by using the rename* syscalls
2) Files receive permission or ownership changes
3) Files get symlinked to
4) Disk partitions get mounted to a directory
5) Whenever uid or gid changes as a result of calling set*uid or set*gid syscalls

There may be other cases, so don't consider this the final specification. As I see more events, I'll add to this when necessary. Or if you have some ideas about when there might be a second object, leave a comment or email me.

Because this is a sparse column, it will not be enabled by default when the csv format is selected. To get it, you will need to pass --extra-obj2 to the ausearch program.

If, however, you are a software developer, then you can get access to the normalized output via a new auparse_normalize_object_primary2 function. The way that it would be used in practice is similar to any of the other normalizer accessor functions. You would do something like this:


    rc = auparse_normalize_object_primary2(au);
    if (rc == 1) {
            const char *val;

            if (auparse_get_field_type(au) == AUPARSE_TYPE_ESCAPED_FILE)
                    val = auparse_interpret_realpath(au);
            else
                    val = auparse_interpret_field(au);
            printf("%s", val);
    }


This new function is not yet available unless you use the source code from github. This will be in the next release, audit-2.8, which should be out in the next week or two. Which reminds me...if you know of any issues in the audit code, now would be a good time to report them.

Tuesday, September 26, 2017

Some security updates in RHEL 7.4

RHEL 7.4 has been out for a little while now. And with Centos build 1708 recently released, there are a couple new security features that I would like to take a moment to highlight.

yama ptrace control
The first item is a new sysctl setting, kernel.yama.ptrace_scope. This is used to control ptracing of processes. If you allow ptracing of processes, you also allow a process's in-memory data to be altered. This can be used to do something referred to as process hollowing or process injection. This means the process starts up but gets modified so that it doesn't do what it's supposed to do.

To prevent this form of attack, we can use kernel.yama.ptrace_scope to set who can ptrace. The different values have the following meanings:

# 0 - Default attach security permissions.
# 1 - Restricted attach. Only child processes plus normal permissions.
# 2 - Admin-only attach. Only executables with CAP_SYS_PTRACE.
# 3 - No attach. No process may call ptrace at all. Irrevocable until next boot.
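These levels can be summarized in a small Python sketch that reads the live setting and reports what it means. The helper names here are mine, not from any library, and the /proc path assumes a yama-enabled kernel:

```python
# Map yama ptrace_scope values to their meanings.
PTRACE_SCOPES = {
    0: "Default attach security permissions",
    1: "Restricted attach: only child processes plus normal permissions",
    2: "Admin-only attach: only executables with CAP_SYS_PTRACE",
    3: "No attach: no process may call ptrace at all (irrevocable until reboot)",
}

def describe_scope(value: int) -> str:
    """Return a human-readable description for a ptrace_scope value."""
    return PTRACE_SCOPES.get(value, "unknown value")

def current_scope(path="/proc/sys/kernel/yama/ptrace_scope"):
    """Read the live setting; returns None if yama is not available."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except OSError:
        return None

if __name__ == "__main__":
    v = current_scope()
    if v is None:
        print("yama ptrace_scope not available on this kernel")
    else:
        print(f"ptrace_scope={v}: {describe_scope(v)}")
```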

You can temporarily set it like this:
echo 2 > /proc/sys/kernel/yama/ptrace_scope

To permanently set it, add a line such as kernel.yama.ptrace_scope = 2 to /etc/sysctl.conf. If you have a system in a DMZ or with sensitive information, I'd recommend a value of 3. I'd go with a 2 for production machines and a 1 for everyone else. Also, while you are in there, you might want to see if you are setting any of these other security-related sysctls:

kernel.kptr_restrict = 1
kernel.dmesg_restrict = 1
kernel.perf_event_paranoid = 2
kernel.kexec_load_disabled = 1


jitter entropy source
RHEL 7.4 also picked up the jitter entropy source. This entropy source mines the natural jitter that exists in the execution of CPU instructions. This helps Linux a whole lot because the Linux kernel is typically starved for entropy. There is one catch: some people in the upstream community think that jitter from the CPU leans towards being deterministic. So, they do not want to automatically stir it into the entropy pool. This means that you must run rngd to get the benefit of this new entropy source. Also, note that rngd only moves entropy from hardware generators to the kernel entropy pool. It in no way creates entropy.


audit events as text
If you read my blog, then you know that there have been improvements for being able to understand what the events mean. You can take the ugly and nearly unreadable audit events and have them turned into English sentences. To do this, you just pass --format=text to the ausearch command.


proctitle added to audit events
The new RHEL 7.4 kernel now includes a proctitle record. The proctitle record gives the command line and its arguments for any event originating from a syscall filter rule. This is useful to see how commands were invoked in case the arguments are important to an investigation.
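The raw PROCTITLE record stores the command line hex-encoded, with NUL bytes separating the arguments (ausearch -i will decode it for you). As a sketch of what the encoding looks like, here is a small Python decoder; the sample value is one I constructed for illustration, not taken from a real log:

```python
def decode_proctitle(hex_field: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    raw = bytes.fromhex(hex_field)
    # Arguments are separated by NUL bytes; join them with spaces.
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace").strip()

# A constructed example encoding "cat /etc/passwd"
print(decode_proctitle("636174002F6574632F706173737764"))  # cat /etc/passwd
```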

Friday, August 11, 2017

Updated Rstudio SRPM available which fixes build on Fedora 26

So, Fedora 26 is out. And with it comes a new openssl which is ABI incompatible with some programs. Turns out one of those is Rstudio. I presume the people at Rstudio are working on migrating to the new openssl. But in the meantime you may want to use Rstudio on F26.

Download here:
http://people.redhat.com/sgrubb/files/Rstudio/

If you are on Fedora 26, you will need to install the compat-openssl10-devel package.

$ dnf install compat-openssl10-devel --allowerasing


This will delete openssl-devel, but you can re-install it later after Rstudio is built. If you are building it for the first time, there are some instructions here.

If you migrated from F25 you should update all your plugins from within Rstudio as recommended in a prior post.

Sunday, August 6, 2017

Super Resolution with Neural Enhance

[This article is rich in hyperlinks. I chose these to do a better job of explaining things than I can normally do. Please visit them.]

In the last blog posting, I talked about how to set up Theano on Fedora 25. Setting this up is pointless if you don't have a goal. There is a really cool application of Deep Learning that has only been published for about a year or two. It's called super resolution. Do you remember that scene in Blade Runner where Harrison Ford's character is analyzing a photo he found and zooms into the mirror to see around the corner? Well, we pretty much have that today. To get properly oriented on this topic, please watch this video:

https://www.youtube.com/watch?v=WovbLx8C0yA

OK. Are you interested in seeing something cool?


Neural Enhance
There is a project on github, neural-enhance, that houses some code that does super resolution. Log in to your AI account that was set up for theano. Then, grab yourself a copy of the code:

$ git clone https://github.com/alexjc/neural-enhance.git

Now, we need to install the dependencies for neural-enhance. It needs a couple things from Continuum Analytics. But neural enhance also calls out for a very specific check-in hash of the Lasagne framework. It appears to be a bug fix. (Just in case you are not familiar, Lasagne is a high-level framework, similar to Keras, in which you describe what you want to make and how the layers are connected, and it builds it.) It would appear that Lasagne developers have not made a release in a long time, hence the special version.

$ conda install pip pillow colorama
$ python3.6 -m pip install -q -r "git+https://github.com/Lasagne/Lasagne.git@61b1ad1#egg=Lasagne==0.2-dev"

OK. Neural Enhance has some pre-trained models that you can download to experiment with. Time to get some models.


$ cd neural-enhance
$ wget https://github.com/alexjc/neural-enhance/releases/download/v0.3/ne2x-photo-default-0.3.pkl.bz2
$ wget https://github.com/alexjc/neural-enhance/releases/download/v0.3/ne4x-photo-default-0.3.pkl.bz2
$ wget https://github.com/alexjc/neural-enhance/releases/download/v0.3/ne1x-photo-deblur-0.3.pkl.bz2
$ wget https://github.com/alexjc/neural-enhance/releases/download/v0.3/ne1x-photo-repair-0.3.pkl.bz2

We have everything...so let's try it out. Andrew Ng says that AI today is good at mapping A to B. (Specifically, see what he says at 4 minutes into the clip. This is such an amazing talk, it's worth watching in its entirety.) Given data of type A, map it to B. I would like to test this using neural enhance. The program claims to have 3 capabilities: zooming for super resolution, deblurring, and denoising pictures. I would like to test the deblurring capability because that has the least subjective output. Given a blurry image, can it create output I can read?

To do this experiment, I took a screenshot of a malware article on "The Register". I loaded that into gimp and then made 3 pictures by applying an 8, 12, and 16 pixel Gaussian blur. They look like this:


8x blur

12x

16x
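If you'd rather script the blurring than use gimp, a Gaussian blur is just convolution with a Gaussian kernel. Here is a minimal numpy sketch; the kernel radius and edge handling are simplified compared to what gimp actually does:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Build a normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur on a 2-D grayscale array."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    # A 2-D Gaussian is separable: convolve the rows, then the columns.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred
```

Running a sharp image through this with increasing sigma reproduces the kind of degradation shown above.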

As you can see, the 8x is not too hard to read. If you never saw the article, you could probably make out what it's about. The 12x is nearly impossible. And the 16x is impossible. Can we decipher this with AI? Let's see...

To use the default model that comes with neural enhance, we would run it as follows. Note that the ai account is not the account that I logged into my desktop with. So, I pass pictures between the accounts through the /tmp directory.

$ python3.6 enhance.py --type=photo --model=deblur --zoom=1  /tmp/screenshot-blurx8.png


On my system, this takes about 20 to 25 seconds to complete. I get the following picture:




Hmm...color me not impressed. It's better, but it's not the jaw-dropping wow that I was looking for. How about if we run the enhanced picture back through and enhance it a second time?




I'm still not impressed. And if it's that fuzzy on 8x, then it has no hope of doing the 12x or 16x. At this point you may be wondering why I'm wasting your time and had you go through all the trouble of setting up theano with the promise of something cool. I wondered, too. Then I realized that if you want something done right, you gotta do it yourself.


Training your own model
The default models that come with neural enhance are general models trained with all kinds of pictures. If we are trying to deblur text, would a model trained on dogs, cats, birds, trees, cars, and whatever really give the best results? Having 20/20 hindsight, I can say no.

So, in the neural-enhance project directory, there is a subdirectory called train. We will go into it and download a general network model and start training our own. In the train directory, I created subdirectories called text-samples and model-backup. The training process is two steps, and I wanted to make a backup between runs - just in case. Regarding the text-samples, I made screenshots of 25 articles from 5 different web sites. I chose articles with no pictures to make the model tuned specifically for text. Another rule is that you should not put the text image that we are using to judge the model into the training samples. That would be cheating. OK, let's start...

$ cd train
$ mkdir model-backup
$ mkdir text-samples
$ cp /tmp/text-samples/*  text-samples/
$ wget https://github.com/alexjc/neural-doodle/releases/download/v0.0/vgg19_conv.pkl.bz2
$ ln -s ../enhance.py enhance.py
$
$ python3.6 enhance.py \
    --train "text-samples/*.png" --type photo --model unblur \
    --epochs=50 --batch-shape=240 --batch-size=12 --buffer-size=1200 \
    --device=gpu0 \
    --generator-downscale=2 --generator-upscale=2 \
    --generator-blocks=8 --generator-filters=128 --generator-residual=0 \
    --perceptual-layer=conv2_2 --smoothness-weight=1e7 \
    --adversary-weight=0.0 \
    --train-noise=10.0 --train-blur=4

I have a beefy GTX 1080 Ti. It took a little over 4 hours to run the pre-training. At first I was getting "unable to allocate memory" errors. After some research I found that the batch-size and buffer-size controlled how much memory was used. If you hit this even with these settings, lower the batch-size to 8 or 6 or 4 and see if that fixes it. The 1080 Ti has 11 GB of memory, so if you only have 4 GB, then you need to drastically reduce it. You can use a utility from nvidia to see how much video memory is being used.

$ nvidia-smi -l 1

Hit control-C to exit it. OK...4 hours have passed and it's done. What we just did was the pre-training. The pre-training helps the real training be more successful in picking out what it needs to. In this round, the generative model is being trained. The next round adds the adversarial model to the training. I'll talk more about that after we kick off the real training.

$ cp ne1x-photo-unblur-0.3.pkl.bz2 model-backup/
$ python3.6 enhance.py \
    --train "text-samples/*.png" --type photo --model unblur \
    --epochs=250 --batch-shape=240 --batch-size=12 --buffer-size=1200 \
    --device=gpu0 \
    --generator-downscale=2 --generator-upscale=2 \
    --generator-start=10 \
    --perceptual-layer=conv5_2 --smoothness-weight=5e3 \
    --adversarial-start=10 --adversary-weight=5e1 \
    --discriminator-start=0 --discriminator-size=48 \
    --train-noise=10.0 --train-blur=4

OK, while that is running let's talk about the strategy. The program uses a generative adversarial network. This is basically two models, a generator and a discriminator, that play a game. The generator learns from the training data how to generate something that is similar to the training data. The discriminator judges the quality of the work. So, it's like an artist creating fake paintings that are close enough to fool the art critic. During training each side gets better and better at the role it has to play. The generator gets better at creating fakes based on feedback from the discriminator, and the discriminator gets better at spotting fakes. The two have to balance to be useful.
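To make the game concrete, here is a deliberately tiny numpy caricature of the alternating updates. This is not a real GAN - the "generator" is just a learned shift applied to noise and the "discriminator" is a bare threshold - but it shows the rhythm of the two players taking turns, and it converges to the point where fakes look like the real data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy adversarial game on 1-D data. Real samples come from N(5, 1).
# The "generator" shifts noise by a learned bias b; the "discriminator"
# is a threshold t that tries to sit between real and fake samples.
def train(steps=400, lr=0.05):
    b = 0.0   # generator parameter: shift applied to noise
    t = 0.0   # discriminator parameter: decision threshold
    for _ in range(steps):
        real = rng.normal(5.0, 1.0, 64)
        fake = rng.normal(0.0, 1.0, 64) + b
        # Discriminator step: move the threshold between the two sample means.
        t += lr * ((real.mean() + fake.mean()) / 2.0 - t)
        # Generator step: push fakes toward the side the discriminator
        # currently labels "real".
        b += lr * np.sign(t - fake.mean())
    return b, t

b, t = train()
print(f"generator shift ~ {b:.2f}, discriminator threshold ~ {t:.2f}")
```

At equilibrium the generator's output mean sits on top of the real data and the threshold can no longer separate them, which is the balance described above.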

Most training runs can take 500 to 1000 epochs or more to complete. I don't have that much time. So, I settled for 250 as a way to balance how much time I want to devote to this experiment vs having a model good enough to see if the technique is working. During the training, my 1080 Ti took about 130 seconds per epoch. That works out to be about 9 hours of runtime.
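The runtime estimate above is simple arithmetic, using the numbers from my run:

```python
seconds_per_epoch = 130
epochs = 250

total_seconds = seconds_per_epoch * epochs   # 32500 seconds
hours = total_seconds / 3600.0
print(f"{epochs} epochs x {seconds_per_epoch}s = {total_seconds}s, about {hours:.1f} hours")
```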

OK. 9 hours have passed. So, how does the new model work? Judge for yourself...these are the converted images:

8x

12x

16x

For the first one, the text is pretty crisp. Much better than the default model. The second one you can see some noise starting to show up - but the text is easily readable. The real test is the final 16x image. It does have some noise in it. Perhaps more than 250 epochs of training would reduce that some more. Perhaps more text samples would help, too. But I have to say that it did an unbelievably good job of taking text that was so blurry that you could not read it and turn it into something so close that you can understand the article and guess what the mistakes were supposed to be.

The moral of this story is...do not depend on Gaussian blur as a way to obscure objects or text in a photo. All it takes is someone to come along with the right model and they can unmask the object.


Conclusion
In this article we've put Theano to use, learned how to train a model for super resolution, and saw that a general model is OK. But to get amazing results requires creating a tuned model for the exact job at hand. Neural enhance is also capable of zooming into pictures and augmenting the missing detail based on its models. The reader may want to experiment with this feature and create models useful for zooming or denoising. Have fun...

Saturday, August 5, 2017

Theano Deep Learning Framework on Fedora 25

 [This article is rich in hyperlinks. I chose these to do a better job of explaining things than I can normally do. Please visit them.]

A few articles ago we covered Torch 7 and how to set it up. There are several other frameworks that are important, each having advantages in one area or another. It's important to have access to all of them because you never know when a killer app lands on any one of them. Today we will show how to set up Theano. Theano is one of the older frameworks and takes a unique approach to GPU acceleration. When you run a program that uses GPU acceleration, it generates and compiles CUDA code based on what your program describes.


THEANO
In the last article about AI, I mentioned that you can setup an account specifically to run AI programs. This is because most of the frameworks install things to your home directory. Sometimes they want versions of things that clash with other frameworks. Sounds like a classic use case for containers. But I wanted to set this up on bare metal so let's dive in.

Theano is python based. It typically wants things that are newer than the system python libraries. So, I'll show you how to set all this up. If you want to create a new ai account, go ahead and do that and/or log in under the account you want to set this up in.

The first step is to download miniconda, which is a scaled-back version of anaconda, a package installer used by Continuum Analytics. (There is some overlap in names with Anaconda, the Fedora and Red Hat package installer. They are not the same.) They have lots of scientific computing packages ready to install. Look over this list to get a feel for it.

To install miniconda, do this:

wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh


Click through the license and accept the default locations.

Once it's done, source .bashrc to update your variables.

source ~/.bashrc

Now, let's install theano. First we need to tell it where our CUDA libraries are installed. If you need information on how to setup a CUDA development environment, see this blog post.

export CUDA_ROOT="/usr/local/cuda"
conda install theano pygpu python=3

This will download and install theano for python 3 and all its dependencies. Continuum is shipping python 3.6, which is ahead of Fedora's python 3.5. Next create a .theanorc file in the homedir. In it, put this:

[cuda]
root = /usr/local/cuda

[nvcc]
flags = -std=c++11


This keeps the nvidia compiler from choking on the gcc/glibc headers and records in a more permanent way where to find the CUDA environment. It is also important at this point that you have fixed /usr/local/cuda/include/math_functions.h as I explained in the article about setting up your CUDA development environment. Theano is the one that chokes on that bad code.

Next, we should test the setup to see if it works. We will start with the bottom layer, pygpu. If this is not working, then something went wrong and nothing else will work. I took the following from this article: http://blog.mdda.net/oss/2015/07/07/nvidia-on-fedora-22. You don't have to make this a program. Just use the python shell.

$ python3
>>> import pygpu
>>> pygpu.init('cuda0')


If it's working, you should see
<pygpu.gpuarray.GpuContext object at 0x7f1547e79550>

Good. Let's exit.

>>> quit()


Now let's test theano itself. The idea here is to make sure it works with simple apps before you jump into a complex AI program and then find trouble. Let's make a program. Copy this into a file we'll call gpu_check.py in the homedir.


from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print('Looping %d times took' % iters, t1 - t0, 'seconds')
print('Result is', r)
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')


We will run 2 tests. One to check that the CPU is working and one to see that the GPU is working.


# THEANO_FLAGS=mode=FAST_RUN,floatX=float32,device=cpu python3 gpu_check.py
# THEANO_FLAGS=mode=FAST_RUN,floatX=float32,device=gpu python3 gpu_check.py


When you test the gpu, if you see errors like:

miniconda3/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h(84): error: expected a "}"
/home/ai3/miniconda3/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h(446): error: identifier "NPY_NTYPES_ABI_COMPATIBLE" is undefined
...
WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not available  (error: cuda unavailable)
...
Used the cpu


This is normal the first time. You need to edit ~/miniconda3/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h.

On line 84, put the whole NPY_ATTR_DEPRECATE line in comments /* */ including the ending comma, save, and retest.

When you see:

Using gpu device 0: GeForce GTX 1080 Ti (CNMeM is disabled, cuDNN 5005)
...
Used the gpu


you are ready for theano...

Next blog post I'll show you something really cool that uses theano.