If you're interested in the details of dbgsym packages as a package maintainer, take a look at the Automatic Debug Packages page in the Debian wiki.
The dbgsym packages are NOT provided by the usual Debian archive though, which is a good thing, since those packages consume quite a lot of disk space. Instead there's a new archive called debian-debug. gdb reported only "Reading symbols from shasum", so let's install the corresponding debug package, which is coreutils-dbgsym in this case, since the shasum binary which generated the core file is part of the coreutils package.
Then let's rerun the same gdb steps (the earlier run ended with "No such file or directory"). For even better debugging it's useful to have the corresponding source code available, and that is just an apt-get source coreutils; cd coreutils away. Thanks to everyone who was involved in getting us the automatic dbgsym package builds in Debian! One package that isn't new, but whose tools are used by many of us, is util-linux, providing many essential system utilities.
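The setup described above can be sketched as a short shell session. The archive line is the standard debian-debug one; the suite name and the use of the coreutils example simply follow the text above, and the privileged commands are left commented.

```shell
# Add the debian-debug archive (normally under /etc/apt/sources.list.d/);
# written to /tmp here so the sketch is side-effect free.
echo 'deb http://debug.mirrors.debian.org/debian-debug/ unstable-debug main' \
    > /tmp/debug.list

# Then, as root on a real Debian system:
#   apt-get update
#   apt-get install coreutils-dbgsym          # debug symbols for coreutils
#   apt-get source coreutils && cd coreutils  # matching source code
#   gdb $(which shasum) core                  # re-run the gdb session, now with symbols
```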
We have util-linux v2. now; there are many new options available, and a few new tools as well. Tools that have been taken over from other packages include last; among the other changes are a DIO column ("access backing file with direct-io") and lsblk, which lists information about block devices.

Debunk some Debian myths. Debian has many years of history, about 25 years already. Over such a long journey through the continuous field of developing our Universal Operating System, some myths, false accusations and bad reputation have arisen.
Today I had the opportunity to discuss this topic, I was invited to give a Debian talk in the 11 Concurso Universitario de Software Libre , a Spanish contest for students to develop and dig a bit into free-libre open source software and hardware.
In this talk, I walked through some of the most common Debian myths, and I would like to summarize here some of them, with a short explanation of why I think they should be debunked.
"Debian is old software": please, use testing or stable-backports. If you use Debian stable, your system will in fact be stable, and that means: "Debian is slow": we compile and build most of our packages with industry-standard compilers and options. I don't see a significant difference in how fast the Linux kernel or MySQL runs on CentOS versus Debian.
"Debian is difficult": I already discussed this issue back in January, in "Debian is a puzzle". "Debian has no graphical environment": this is, simply put, false. We have GNOME, KDE, Xfce and more.
The basic Debian installer asks you what you want at install time. Not all, but most of them. Besides, many package developers get paid to do their Debian job. Also, there are external companies which do indeed offer support for Debian (see Freexian, for example). "I don't trust Debian": why?
Did we do something to gain this status? If so, please let us know. You don't trust how we build or configure our packages? You don't trust how we work? Anyway, I'm sorry, but you have to trust someone if you want to use any kind of computer; supervising every single bit of your computer isn't practical for you. Please trust us, we do our best. Many people use Debian. They even run Debian in the International Space Station. Do you count derivatives, such as Ubuntu? I believe this myth is just pointless, but some people out there really think nobody uses Debian.
"Debian uses systemd": well, this is true. But you can run sysvinit if you want. I prefer and recommend systemd, though. "Debian is only for servers": no. See myths 1, 2 and 4.

My free software activities, February and March. Looking into self-financing: before I begin, I should mention that I started tracking my time working on free software more systematically. I spend a lot of time on the computer, as regular readers of this blog might remember, so I wanted to know exactly how much time was paid versus free work.
I was already using org-mode's time-clock system to keep track of my work hours, so I just extended this to my regular free software contributions, which also helps in writing those reports. So I started thinking about ways of financing this work. I created a Patreon page, but I'm hesitant to launch such a campaign: so before starting such an effort, I'd like to get a feeling of what other people's experience with it has been.
I know that joeyh is close to achieving his goals, but I can't compare with the guy that invented git-annex or debhelper, so I'm concerned I wouldn't be able to raise the same level of funding. If you have any advice, feel free to contact me in private or in the comments.
If you would be ready to fund my work, I'd love to know about it, obviously, but I guess I wouldn't get real numbers until I actually open up such a page. Now, onto the regular report.

Wallabako. I spent a good chunk of time completing most of the things I had in mind for Wallabako, which I mentioned quickly in the previous report.
Wallabako is now much easier to install, with clearer instructions, an easier-to-use configuration file, more reliable synchronization, and read-status propagation.
I've also looked at better integration with Koreader, the free software e-reader software that forms the basis of the okreader free software distribution, which has been able to port Debian to the Kobo e-readers, a project I am really excited about. This project has the potential of supporting Kobo readers beyond the lifetime that upstream grants them, and it removes a lot of the proprietary software and spyware that ships with the Kobo readers.
So I have made a few contributions to okreader and also to koreader, the ebook reader okreader is based on.

Stressant. I rewrote stressant, my simple burn-in and stress-testing tool. After struggling in turn with Debirf, live-build, vmdebootstrap and even FAI, I figured maybe it wasn't the best idea to try to reinvent that particular wheel: it turns out there's a well-known, successful and fairly complete recovery system called Grml.
It is a Debian derivative, so all I needed to do was stop procrastinating and actually write the stressant tool itself, instead of just creating a distribution with a bunch of random tools shipped in. This allowed me to focus on which tools were best for stress-testing different components. This selection ended up being: Stressant still needs to be shipped with Grml for this transition to be complete.
I also need to figure out a way to start stressant automatically from a boot menu, to automate deployments on a larger scale, although because I have little need for the feature at this moment, this will likely wait until a sponsor shows up for it to be implemented.
Still, stressant has useful features, like the capability of sending logs by email using a fresh new implementation of the Python SMTPHandler, BufferedSMTPHandler, which waits for logging to complete before sending a single email. Another interesting piece of code in there is the NegateAction argparse handler that enables the use of "toggle flags" (e.g. a flag together with its negated counterpart).
I'm so happy with the code that I figure I could just share it here directly: I wonder why stuff like this is not in the standard library yet; maybe just because no one bothered yet?
It'd be great to get feedback from more experienced Pythonistas on this one. I hope that my work on Stressant is complete. I get zero funding for this work, and have little use for it myself: I manage only a few machines, and such a tool really shines when you regularly put new hardware online, which is fortunately? I'd be happy, of course, to accompany organisations and people that wish to further develop and use such a tool.
Standard third-party repositories. After looking at improvements for the grml repository instructions, I realized there was no real "best practices" document on how to configure an Apt repository. Sure, there are tools like reprepro and others, but those hardly qualify as policy. The larger problem of untrusted Debian packages remains generally unsolved;
in other words, we want to address the more general problem of insecure .deb packages. This led to the creation of standardized repository instructions that define: For example, a lot of repositories recommend this type of command to initialize the OpenPGP trust path: Some other repositories don't even bother teaching people about the proper way of adding those keys. Since pinning is so confusing, most people don't actually bother configuring it, and I have yet to see a single repo advise its users to configure those preferences, which are essential to limit what a repository can do.
To keep configuration simple, we recommend this: It is my hope that this design will gain more traction in the years to come and become a de facto standard that will be a key part in safely adding third-party repositories.
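As a purely illustrative example (the repository name, key path, and priority below are hypothetical placeholders, not the actual recommended values), a pinned third-party repository might be declared like this:

```shell
# Hypothetical third-party repo entry, with its signing key confined to a
# dedicated keyring via signed-by (files written to /tmp for this sketch;
# the real locations would be under /etc/apt/).
cat > /tmp/example-thirdparty.list <<'EOF'
deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://deb.example.com/debian stable main
EOF

# Pinning: a low priority keeps versions from the official archive
# preferred, so the repo cannot silently replace official packages.
cat > /tmp/example-thirdparty.pref <<'EOF'
Package: *
Pin: origin deb.example.com
Pin-Priority: 100
EOF
```

The signed-by restriction means the repository's key can only sign that one archive, and the pin limits how far its packages can reach into the system, which is exactly the kind of containment the preferences are meant to provide.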
There is obviously much more work to be done to improve security when installing untrusted .deb packages.

I'm still ambivalent on Signal. I've been following Signal for a while: because I try to avoid Google proprietary software on my phone, it's basically the only way I could even install Signal. Unfortunately, the repository is out of date and introduces another point of trust in the distribution model, so I've started a discussion about how Signal could be distributed outside of the Google Play Store.
I'd like to think it's one of the things that led the Signal people to distribute an official copy of Signal outside of the Play Store. After much struggling, I was able to upgrade to this official client, and I will be able to upgrade easily in future by just downloading the APK.
Do note that I ended up reinstalling and re-registering Signal, which unfortunately changed my secret keys. I do hope Signal enters F-Droid one day, but it could take a while, because it still doesn't work without Google services and barely works with MicroG, the free software alternative to the Google services clients. Moxie also set out a list of requirements, like crash reporting and statistics, that need to be implemented on F-Droid's side before he agrees to the deployment, so this could take a while.
I've also participated in the, ahem, discussion on the JWZ blog regarding a supposed vulnerability in Signal whereby it would leak previously unknown phone numbers to third parties. I reviewed the way phone numbers are uploaded and, while it's possible to create a rainbow table of phone numbers (which are hashed with a truncated SHA-1 checksum), I couldn't verify the claims of other participants in the thread.
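To see why such a rainbow table is feasible at all, here is a rough sketch; the phone number is made up, and the exact encoding and truncation length Signal uses are not reproduced here, only the general idea that hashing a low-entropy value is easy to reverse by enumeration.

```shell
# Phone numbers have so little entropy that hashing them barely helps:
# an attacker can enumerate every plausible number, hash each one, and
# look the truncated digest up in a precomputed table.
phone='+15551234567'                                   # hypothetical number
hash=$(printf '%s' "$phone" | sha1sum | cut -c1-20)    # truncated SHA-1
echo "$hash"
```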
I didn't work much in February, so I had a lot of hours to catch up on, and I was unfortunately unable to do so, partly because I was busy with other projects, and partly because my colleagues are doing a great job at resolving the most important issues.
So one of my concerns this month was finding work. It seemed that all the hard packages were already taken, and I don't feel quite comfortable tackling the LTS branch of the Linux kernel yet. I spent quite a bit of time trying to figure out what was wrong with pcre3, only to realise the "32" in the report was not about the architecture, but about the character width. I still spent some time trying to reproduce the issues, which require a compiler with an AddressSanitizer, something that was introduced in both Clang and GCC after Wheezy was released, which makes reproducing this fairly complicated. This allowed me to experiment more with Vagrant, however, and I have provided the Debian cloud team with a Vagrant box that was merged shortly after, although it doesn't show up yet in the official list of Debian images.
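For context, an AddressSanitizer build is just a compiler flag away on post-Wheezy toolchains; a minimal sketch (the demo program is my own, not the actual reproducer for the pcre3 issues):

```shell
# Build a trivial program with ASan instrumentation. This needs a compiler
# newer than Wheezy's (GCC >= 4.8 or a recent Clang), which is exactly the
# reproduction problem described above.
cat > /tmp/asan-demo.c <<'EOF'
#include <stdlib.h>
#include <string.h>
int main(void) {
    char *p = malloc(8);
    strcpy(p, "ok");   /* within bounds; ASan stays silent */
    free(p);
    return 0;
}
EOF
if command -v gcc >/dev/null 2>&1; then
    gcc -fsanitize=address -g -o /tmp/asan-demo /tmp/asan-demo.c \
        && /tmp/asan-demo \
        || echo "gcc present but ASan runtime not available here"
fi
```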
That was one tricky bug as well, since it's not a security issue in apparmor per se, but more an issue with things that assume a certain behavior from apparmor. I have concluded that Wheezy was not affected, because there are no assumptions of proper isolation there; those are provided only starting from LXC 1.
I also couldn't reproduce the issue on Jessie but, as it turns out, the issue was sysvinit-specific, which is why I couldn't reproduce it under the default systemd configuration shipped with Jessie. I also looked at the various binutils security issues. I similarly reviewed the mp3splt security issues (specifically CVE) and was fairly puzzled by that one, which seems to be triggered only by the same address-sanitization extensions as PCRE, although there was some pretty wild interplay with debugging flags in there.
All in all, it seems we can't reproduce that issue in Wheezy, but I do not feel confident enough in the results to push the issue aside for now. I finally uploaded the pending graphicsmagick issue (DLA), a regression update to fix a crash that was introduced in the previous release (DLA, mistakenly named DLA). Hopefully that release should clear up some of the confusion and fix the regression.
I couldn't reproduce the issue in a local VM, even after following the Ubuntu setup tutorial, as I wasn't too familiar with the Firebird database until now. I made an explicit list of the CAs that need to be removed after reviewing the Mozilla list. I have also done some "meta" work in starting a discussion about fixing the missing DLA links in the tracker; as you will notice, all of the above links lead nowhere.
Thanks to pabs, there are now some links, but unfortunately there are still DLAs missing from the website. We also discussed ways to automate this (Debian bug), something which is currently a manual process. This is now in the hands of the excellent webmaster team. I have also filed a few missing security bugs (Debian bug, Debian bug), partly because I wanted to help the security team. But it turned out that I felt the script needed some improvements, so I submitted a patch to make it easier to run.
Other projects. As usual, there's the usual mixed bag of chaos:

But I ordered a slightly atypical non-gamer configuration: no need for NVidia's binary crap drivers. It was not available in Tuxedo Computers' online shop, but they nevertheless ordered it for me. To be more precise, it runs Debian Sid with sysvinit-core as init system and i3 as window manager. And since I found nothing, I did what open source guys usually do in such cases: I wrote it myself, of course in Perl, and called it systray-mdstat.
First I wondered about which build system would be most suitable for that task, but in the end I once again went with Dist::Zilla for the upstream build system, and hence dh-dist-zilla for the Debian packaging.
As of now, systray-mdstat is also available as a package in Debian Unstable. It won't make it into Stretch, as its first line of code was written after the soft freeze for Stretch was already in place.

We are running OpenLDAP, the slapd daemon. And after searching the log files, the cause of the outage was obvious: "slapd: Too many open files". I was blinded by this and rushed to open a Debian bug against openldap. The reply from Steve Langasek was clear: if people are hitting open-file limits trying to open two extra files, disabling features in the codebase is not the correct solution.
Obviously, the problem was somewhere else. I started investigating system limits, which seem to have two main components: I reviewed the default system-wide limits and they seemed OK.
So, let's change the other limits. You can check current limits using the ulimit bash builtin. In the case of my slapd, the limit seemed low for a rather busy service. But at some point, the slapd daemon starts to drop connections again. Things start to turn weird here. The changes we made until now don't work, probably because when the slapd daemon is spawned at bootup by root (sysvinit in this case), no PAM mechanisms are triggered.
So, I was forced to learn a new thing: you can check the limits for a given process this way: If we search the internet for how to change process limits, most of the docs point to a tool known as prlimit. According to the manpage, this is a tool to get and set process resource limits, which is just what I was looking for. According to the docs, the prlimit system call has been supported since Linux 2. But yes, more problems: the prlimit tool is not included in the Debian Wheezy release.
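The two layers mentioned above can be inspected like this; ulimit shows the current shell's own limits, while the prlimit tool from util-linux (absent on Wheezy, as noted) can query any process by PID:

```shell
# Per-shell view: soft and hard limits on open files (RLIMIT_NOFILE).
ulimit -Sn
ulimit -Hn

# Per-process view for an arbitrary PID ($$ = this shell), where the
# prlimit tool is available.
if command -v prlimit >/dev/null 2>&1; then
    prlimit --pid $$ --nofile
fi
```

On a system without prlimit, the same information is exposed in /proc/PID/limits, which is how one can still check a running slapd on Wheezy.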
A simple call to a single system call was not going to stop me now, so I searched the web some more until I found this useful manpage: Nobody found this issue before?

Crossgrading Debian. So, once again I had a box that had been installed with the kind-of-wrong Debian architecture: in this case powerpc (32-bit, big-endian), while I wanted ppc64 (64-bit, big-endian).
If you want to follow this, be aware that I use sysvinit. I doubt this can be done this way with systemd installed, because systemd has a lot more dependencies for PID 1, and there is also a dbus daemon involved that cannot be upgraded without a reboot.
To make this a bit more complicated, ppc64 is an unofficial port, so it is even less synchronized across architectures than sid normally is. (I would have used jessie, but there is no jessie for ppc64.) Be prepared: to work around the archive synchronisation issues, I installed pbuilder and created 32- and 64-bit base tarballs.
Time to go wild: apt install dpkg:ppc64. I manually installed the libraries, then tried again. Now, that only half works, because apt calls dpkg twice: once to remove the old version, and once to install the new one. Your options at this point are apt-get download dpkg:ppc64. Automate that: now I'd like to make this a bit more convenient, so I had to repeat the same dance with apt and aptitude and their dependencies.
Thanks to pbuilder, this wasn't too bad. With the aptitude resolver, it was then simple to upgrade a test package: aptitude install coreutils:ppc64. I did, and it replaced the package just fine. So I asked dpkg for a list of all powerpc packages installed (since it's a ppc64 dpkg, it will report powerpc as "foreign"), massaged that into shape with grep and sed, and gave the result to aptitude as a command line. Some time later, aptitude finished, and I had a shiny 64-bit system, crossgraded through an ssh session that remained open the whole time, and without a reboot.
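The grep-and-sed massaging step might look roughly like this; the exact invocation wasn't given, so this is just a guess at its shape:

```shell
# Rewrite installed old-architecture package names into arch-qualified
# targets for aptitude, e.g. "bash:powerpc" -> "bash:ppc64".
to_ppc64() {
    grep ':powerpc$' | sed 's/:powerpc$/:ppc64/'
}

# On the real system one would feed it dpkg's package list:
#   dpkg-query -W -f '${Package}:${Architecture}\n' | to_ppc64 \
#       | xargs aptitude install
# Demonstrated here on canned input:
printf 'bash:powerpc\nlibc6:amd64\ncoreutils:powerpc\n' | to_ppc64
# -> bash:ppc64
#    coreutils:ppc64
```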
After closing the ssh session, the last 32-bit binary was deleted, as it was no longer in use. There were a few minor hiccups during the process where dpkg refused to overwrite "shared" files with different versions, but these could be solved easily by manually installing the offending package with dpkg --force-overwrite -i.

Build a Debian package against Debian 8. In the current blog post, a tutorial on setting up a package build with OBS from Debian packages is presented.
Debian Stretch weekly netinst CD. Enable the experimental repository: echo "deb http: The obs-api package will configure the apache2 HTTPS webserver, creating a dummy certificate for stretch to serve the OBS web UI. Ensure all OBS services are running (backend services: obsrun). Please check the journal logs to see whether something went wrong or got stuck.
Visit the web UI to check the hello package build state. OBS logs to the journal; check in the journal logs that everything went fine.

Free software activities in September. Here is my monthly update covering what I have been doing in the free software world over the previous month: Fixed a number of instances in the Django web development framework where methods had mutable default arguments, such as lists or dictionaries.
Worked with Evgeni Golov to run autopkgtest and autodep8 tests after builds. This works around a long-standing "wontfix" upstream issue. Merged support from defivelo to support Django 1.

Reproducible builds. Whilst anyone can inspect the source code of free software for malicious flaws, most Linux distributions provide binary or "compiled" packages to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced, either maliciously or accidentally, during this compilation process, by promising that identical binary packages are always generated from a given source.
I made the following improvements to our tools: Added a global Progress object to track the status of the comparison process, allowing for graphical and machine-readable status indicators; I also blogged about this feature in more detail. Moved the global Config object to a more Pythonic "singleton" pattern and ensured that constraints are checked on every change. Displayed the "disordered" behaviour we intend to show on startup. Fixed an issue where temporary files were being left on the filesystem, and added a test to avoid similar issues in future.
Added a testcase for. Checked the stripping process before comparing file attributes, to make failures less confusing. Moved to a lookup table for descriptions of stat(1) indices and used that for nicer failure messages. Stopped uselessly testing whether the inode number has changed. Ran perlcritic across the codebase and adopted some of its prescriptions, including explicitly using oct().
The event was covered by the Salzburg Cityguide. Filed an ITP for the roughtime secure time-synchronisation client and server; this is blocked on packaging the Bazel build system. Filed various other bugs: "Experimental push on scalar is now forbidden"; fortunes-es: "No such file or directory"; greylistd: fails to install in testing; witty: drop the build dependency on hardening-wrapper; ruby-em-hiredis: "Could not find hiredis"; highlight. Issued a DLA for jsch, correcting a path-traversal vulnerability. Issued a DLA for unadf, correcting a buffer-underflow issue.
Issued a DLA for dwarfutils, working around an out-of-bounds read issue. Enhanced Brian May's find-work --unassigned switch to take an optional "except this user" argument. Marked matrixssl and inspircd as being unsupported in the current LTS version.
I also bumped the Debian package epoch (the "2:" one), and I additionally backported this upload to Debian Jessie. I also backported this upload to Debian Jessie. Moved to the "minimal" debhelper style, making the build reproducible. I sponsored the upload of 5 packages from other developers. I also NMU'd: RC bugs: I filed 37 FTBFS bugs against csoundqt, cups-filters, dymo-cups-drivers, easytag, erlang-p1-oauth2, erlang-p1-sqlite3, erlang-p1-xmlrpc, erlang-redis-client, fso-datad, gnome-python-desktop, gnote, gstreamermm.

Why conntrackd in Debian is better with systemd. There has been some discussion around my decision to drop sysvinit support in the conntrackd package in Debian version 1:. The rationale I used for such a move was sent to the debian-devel mailing list, and here it is: before reading the rest of this blog post, please note that I'm not interested in the 'systemd vs sysvinit' war, and from now on I will focus mainly on the subject of building a firewall cluster with netfilter technology and the reasons why I think sysvinit is irrelevant here.
I started working with firewall clusters around the time of Debian Squeeze. Back then I used only sysvinit, because it was the Debian default init system and because I had not dived so much into the internals of the firewall cluster itself. At that time, the conntrackd Debian package included support for sysvinit by means of two files (the two files that I dropped in 1:). The conntrackd daemon is used in HA firewall clusters to replicate connection states between nodes of the cluster, so flow states are known on all nodes and they can properly perform stateful firewalling.
When you build HA clusters, there are two basic states of the cluster you may check and adjust. With conntrackd, the failover situation is straightforward. The failback situation is different: this is the case if you are building a multi-master firewall cluster, i.e. Asking for a complete synchronisation is done by means of a new instance of conntrackd, which communicates over a UNIX socket with the main conntrackd daemon (the one which is actually communicating with the other node).
A failback boot procedure may look like this: The point here is that steps 4 and 5 suffer from a very bad race condition. As you can probably imagine, this is the worst possible scenario: ironically, it happens just after a failback operation, when all of our cluster is supposed to be up and running. To avoid the race condition, some typical hacks could be implemented: The solution is elegant and simple, and offers more direct benefits. Using sysvinit is prone to the described race condition.
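Expressed as systemd units, the fix becomes purely declarative ordering. The following is a hypothetical sketch, not the actual Debian packaging: the unit name, paths, and the resync invocation are illustrative.

```shell
# A oneshot unit that requests full state resynchronisation only once
# conntrackd itself is up, removing the boot-time race between the two
# steps described above (written to /tmp for this sketch; a real unit
# would live under /etc/systemd/system/).
mkdir -p /tmp/example-units
cat > /tmp/example-units/conntrackd-resync.service <<'EOF'
[Unit]
Description=Request full conntrack state resynchronisation (sketch)
After=conntrackd.service network-online.target
Requires=conntrackd.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/conntrackd -n

[Install]
WantedBy=multi-user.target
EOF
```

With sysvinit, this ordering can only be approximated with numbered init scripts and sleep-based hacks; systemd's After=/Requires= dependencies express it directly, which is the core of the argument above.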
No serious firewall cluster with this architecture would use sysvinit. It's obvious to me that systemd is a better technology than sysvinit. The sysvinit approach was OK for a while, and for me it was fascinating when I started developing init scripts and learning how things worked.
But now we have systemd.