
Quorten Blog 1

First blog for all Quorten's blog-like writings

Okay, so I’ve stated earlier that when I feel like I’m reading some well-organized reference documentation, I don’t feel inclined to take notes. But, alas, now I feel that if I don’t take notes, my notes will not be an accurate reflection of the things I learn and the things I do, so here I am taking notes on Perl, even though I don’t feel inclined to do so.

You can store the name of a subroutine in a variable in Perl. Suppose you do so, and then you want to call the subroutine that the variable names. How do you do that? Use this syntax, which takes a reference to the named subroutine and immediately calls it. (If the variable instead holds a code reference rather than a name, a plain $action->() suffices.)

&{\&{$action}}();

20190104/DuckDuckGo perl execute function defined as variable
20190104/https://stackoverflow.com/questions/1915616/how-can-i-elegantly-call-a-perl-subroutine-whose-name-is-held-in-a-variable

Suppose you call a command using the backticks syntax to capture its standard output, but you also want to get its exit status. How do you do that? Simple, just use the $? variable. Note that $? holds the raw wait status, so you need $? >> 8 to get the true exit status:

my $output = `some_command`;
my $status = $? >> 8;

20190104/DuckDuckGo perl backticks exit code
20190104/https://stackoverflow.com/questions/7819197/getting-the-return-value-of-a-command-executed-using-backticks-in-perl

Read on →

I was thinking about that GitHub user profile picture where I saw a QR barcode that contained text like “day 39 they still think I’m a barcode.” Is there any larger scale meaning and sources behind that message? Apparently not.

Failed search.

20190117/DuckDuckGo day 39 they still think I’m a barcode

On the other hand, I found this interesting review of different print-on-demand services.

20190117/https://medium.com/@mwichary/my-experiences-printing-a-small-batch-of-books-c04141b63dfe

Also, this was part of an interesting project that the author was working on, a book about a party where friends were invited to read parts of books that have changed their lives. Party Where We Read Things. There was a copyright inquiry that went back and forth with one of the print-on-demand providers, since the book necessarily included quotes and passages from other copyrighted works.

20190117/https://medium.com/@mwichary/party-where-we-read-things-c503c3ec624c

Ha! Very interesting. Yelp trained an artificial neural network to eliminate all bugs in the app, and it ended up deleting everything. They had to roll back.

“Can’t have any bugs… if you don’t have any code”

Of course, because oftentimes the fix for a bug is to delete extra code. Taken to its logical conclusion, you simply delete all the code.

“There was another case when the neural net was playing Tetris and in order to avoid losing, it was putting the game on pause.”

20190117/https://twitter.com/bendhalpern/status/1085872254062342145

Do you want to set up a nature camera, a surveillance camera, or some other camera, but you are worried that writing 24/7 video data to an SSD or SD card will wear it out? Sure, you can try using larger SSD storage to delay the onset of failure, but will this really be adequate? Indeed, it will be.

20190117/DuckDuckGo how long does solid state storage last for security camera
20190117/http://www.tomshardware.com/forum/281343-32-cameras-system

Here are some calculations to back this up.

Suppose writing video data for 30 minutes consumes 1.4 GB of storage.

1.4 GB/30 min * 2 * 24 = 67.2 GB/day
67.2 GB/day * 7 = 470.4 GB/week

So, how long will your SSD last at a write endurance of 100,000 program/erase cycles? If the drive holds exactly one day's worth of footage, it gets fully rewritten once per day; if it holds a full week's worth, it gets rewritten once per week.

100,000 cycles / (365.25 rewrites/year) = 273.785 years
100,000 cycles / (365.25/7 rewrites/year) = 1916.496 years

So, if you can store a full week's worth of footage on a solid-state drive, you have no need to worry about wearing it out too quickly. That being said, you definitely want to go for the larger storage if you can: it's far easier to finish the keep/delete/archive decisions on a week's worth of data all at once at the end of the week, although some initial data entry may still be done throughout the week.

Read on →

Wow, now here I am, I’ve come full circle… I’m blogging about the GitHub blog on my GitHub Pages site. (Well, at least for now, until I end up migrating to something else in the future.)

Filament, yet another rendering engine? (A physically based real-time renderer, in this case.) Yes, but this time, the main advantage they’re touting for the new code base is Android support. So, of course it’s made by Google.

20190116/https://blog.github.com/2018-12-21-release-radar-november-2018/
20190116/https://github.com/google/filament
20190116/https://developers.google.com/ar/develop/java/sceneform/

Now, this is an interesting addition. Download all your GitHub data. One, yes they’re making changes to comply with GDPR. Two, they’re recognizing that people want to move off of GitHub to other services and take their projects with them, so they’re making all of that project data available in a machine-readable format.

20190116/https://blog.github.com/2018-12-19-download-your-data/

Interesting, this is GitHub’s status page… well, their new status page. They’ll be phasing out the old status page.

20190116/https://blog.github.com/2018-12-11-introducing-the-new-github-status-site/
20190116/https://www.githubstatus.com/

Read on →

GPL Cooperation Commitment

2019-01-16

Categories: legal   license  
Tags: legal   license  

Interesting, the GPL Cooperation Commitment. This was started by Red Hat. Why? Historically, it was considered a best practice in GPLv2 enforcement to take a cooperative approach when infringers were found: on first notification, the goal was to help them come into compliance, not to sue them for money. Apparently, and unfortunately, there have been some cases where this was not the approach. Also, unlike GPLv3, GPLv2 does not unambiguously provide for reinstatement of the license once a violation is cured. The goal of the GPL Cooperation Commitment is for companies and software developers to state unambiguously that they stand by the historic best practice for GPL enforcement, extending GPLv3’s cure provisions to GPLv2 works, so as to add more predictability to how the license is enforced.

20190116/https://blog.github.com/2018-11-07-github-joins-GPL-Cooperation-Commitment/
20190116/https://gplcc.github.io/gplcc/

What does an interplanetary spaceport look like? Alas, searching for that doesn’t turn up much in the way of useful results. Okay, let’s try another take on the subject. Let’s look at the biggest international mass passenger transit systems operating in the world today: airports. Naturally, an interplanetary spaceport geared mainly at carrying human passengers would look quite similar to a scaled-up airport.

And the result? Well, the biggest airport in the world, Hartsfield–Jackson Atlanta International Airport, looks quite uninteresting from the aerial view. Mainly, it’s open space and runways. It’s inside the smaller buildings where all the passenger logistical routing magic happens.

20190114/DuckDuckGo world’s largest airports
20190114/https://en.wikipedia.org/wiki/World%27s_busiest_airports_by_passenger_traffic
20190114/https://en.wikipedia.org/wiki/Hartsfield%E2%80%93Jackson_Atlanta_International_Airport

Automated people mover? Interesting name for the small autonomous trains typically deployed at airports and in cities too small to justify the expense and scale of a human-operated train.

20190114/https://en.wikipedia.org/wiki/The_Plane_Train
20190114/https://en.wikipedia.org/wiki/People_mover
20190114/https://en.wikipedia.org/wiki/Water_salute

Read on →

So, yes, we do know that driver’s licenses have been declining in popularity among the younger generation for quite some years now. Do we have newer articles about the subject? Yes, we do, by a few years. Alas, given that this is a generational decline, articles only a few years newer don’t really tell you that much in the way of new information.

20190114/DuckDuckGo declining 16 year old driver’s license
20190114/https://www.theatlantic.com/technology/archive/2016/01/the-decline-of-the-drivers-license/425169/
20190114/http://time.com/money/4185441/millennials-drivers-licenses-gen-x/
20190114/http://fortune.com/2016/01/20/decline-drivers-licenses-for-millennials/

The few different articles I’ve found appear to repeat a lot of the same information, as if they were grabbing their details from the same sources. Yes, this is the source they were quoting:

20190118/https://deepblue.lib.umich.edu/bitstream/handle/2027.42/99124/102951.pdf?sequence=1

Anyways, these are some of the interesting details I wanted to summarize.

  • The percentage of people aged 16 to 44 with a driver’s license has been decreasing steadily since 1983.

  • 16-year-olds licensed: 46.2% in 1983, 24.5% in 2014 (a 47% relative decline)

  • 19-year-olds licensed: 87.3% in 1983, 69.0% in 2014 (a 21% relative decline)

Read on →

If you have a region enclosed by a boundary contour, sure, one easy way to compute the area inside it is to do a “scan-fill” of the region and count the total number of pixels or voxels, but what about a resolution-independent vector computation approach? The obvious first step would be to tessellate the region into triangles or tetrahedra, whose areas can be computed trivially in a vector manner. But how do you get there? One approach that I thought of is progressive convex hull determination. First you determine the convex hull of the entire region and calculate its area, then you subtract the convex hulls of the pockets that cut into it to make the region concave. If those pockets have pockets of their own, you add their areas back. Keep doing this until you recurse down to the last convex sub-shapes.

Another, purportedly more popular, alternative is to compute a (constrained) Delaunay triangulation of the region, then simply add up the areas or volumes of the resulting triangles. Except for the triangulation step itself, it is conceptually easy and simple.

20190114/https://en.wikipedia.org/wiki/Delaunay_triangulation
20190114/https://en.wikipedia.org/wiki/Constrained_Delaunay_triangulation
20190114/https://en.wikipedia.org/wiki/Chew%27s_second_algorithm
20190114/https://en.wikipedia.org/wiki/File:LakeMichiganMesh.png

Linked lists done right. Motivation? Build efficient, reusable linked list code.

The core idea is simple to implement: rather than having linked list pointers point to the head of the containing data structure, they point to the next pointer itself, embedded in the container. This makes the pointers independent of the specific layout of the containing data structures, so the code for navigating linked lists is fully generic and can be reused.

The next level up is accessing members of the node’s containing data structure. Here is the main optimization at this point. Define an arithmetic expression consisting of one variable plus constant additions and subtractions only. The constant you add or subtract depends on which field you want to access; the variable is the yet-unknown address of the final data structure. Only when you convert the expression to an actual pointer to dereference must you compute its full value. This lets you skip some unnecessary pointer dereferences and arithmetic.

In some cases this means you never need to allocate storage for a variable at all: you simply use the storage of another variable plus an arithmetic expression to “load” it.

Read on →