Showing posts with label browser. Show all posts

Thursday, March 31, 2016

Mega.co.nz, web-based file streaming & Copy.com users

The following is an open letter to Mega.co.nz about implementing web-based audio file streaming (and attracting Copy.com users):

March 31, 2016
To the management (and developers) of Mega.co.nz:

In one month, on May 1, 2016 (as you may know), Barracuda Networks will close its (reportedly) "highly rated" Copy.com service.

They are directing "millions of users" to convert to Microsoft's similar service.

Instead, to attract some of those users, it might be in Mega.co.nz's best interest to implement certain Copy.com features. I'm thinking of one in particular:

Copy.com's web-based file manager automatically streams audio files directly (particularly Ogg Vorbis files, with the extension .ogg, and MP3 files).

Thus, whenever users shared a web link to a directory tree on Copy.com with other people, the recipients, simply by navigating there, could stream that audio immediately and directly.

In other words, the recipients of the link could find and stream any audio file in a web browser, without any additional (bothersome or worrisome) steps required; i.e., without having to:
  1. Download the audio file;
  2. Choose an audio player program; or even
  3. Install a special audio player for Ogg-Vorbis files.
In many cases—for many recipients—these additional steps can be show-stoppers.

This is particularly true in the case of public links.
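For reference, modern browsers can stream both formats natively through the HTML5 audio element, so such a feature needs no plugin at all. Here is a small sketch (the extension-to-MIME mapping and the function names are merely illustrative, not anyone's actual implementation) of how a web file manager might do it:

```javascript
// Sketch: how a web-based file manager might stream shared audio in place.
// (Illustrative only; the names and the MIME mapping are invented here.)

// Map a shared file's extension to a streamable MIME type, or null.
function streamableType(filename) {
  const types = { ogg: 'audio/ogg', mp3: 'audio/mpeg' };
  const ext = filename.split('.').pop().toLowerCase();
  return types[ext] || null;
}

// In the browser, play a streamable file via the HTML5 <audio> element,
// so the recipient never downloads a file or installs a player.
function playInline(fileUrl) {
  const audio = document.createElement('audio');
  audio.src = fileUrl;
  audio.controls = true;
  document.body.appendChild(audio);
  audio.play();
}
```

A file manager could call streamableType on each listed file and, for any match, render the inline player instead of a bare download link.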

Many Copy.com users would find this direct-streaming feature highly useful, IMO.

Mega.co.nz could attract more Copy.com users to their service by duplicating this feature.

Does Mega.co.nz plan to add this functionality—of direct-streaming Ogg-Vorbis (extension .ogg) or MP3 files—to their web-based file manager? Would Mega.co.nz's management consider it?

Already, Mega.co.nz's phone apps stream audio. The web-based file manager needs this also, for the easiest possible access when sharing web links.

With warm regards,

Copyright (c) 2016 Mark D. Blackwell.

Tuesday, June 4, 2013

Blog posts' date position

Just now I was reading a blog to avoid emailing its author with questions they already blogged about. Like others with this purpose, I read it reverse-chronologically (i.e. from the top).

While reading a blog purposefully to learn the current status of a fast-changing software system, it seems important to gather a quick sense of the time context of each post.

Inevitably, I find myself sliding my browser window down to the bottom of each post, to see how long each was released before the post above it, just in case the time interval is much, much longer than those above.

Then I slide the window back, uncertain whether I have found again the proper beginning of the proper post.

Some blogs may never have a delay of more than two weeks between posts.

If I knew this were always the case, I wouldn't even look. But since I am not sure, I find myself checking the dates.

Viewing a blog's archive helps somewhat (and furthermore I can read a whole blog by clicking its posts in an archive list; but this seems less natural).

So here is a minor suggestion, for blog formatters' consideration: it would be useful to place the date of each post immediately below its title.

Copyright (c) 2013 Mark D. Blackwell.

Monday, May 27, 2013

Robin Hodson compositions

An acquaintance of mine, Mr. Robin Hodson, has composed quite a number of choral and chamber works worthy of note. These are not 'modern' music; they are quite listenable.

One can hear them free of charge on ScoreExchange. Just click the tab labeled 'Scorch plug-in', and install the plugin if necessary. (BTW, Sibelius recently has made their plugin work better.)

Truly quite excellent (especially harmonically) are:
  • 1993   Wind Quintet (1: Martial Fugue & Western Wind)
  • 2002   Verbum Caro Factum Est
  • 2003   English Missa Brevis
  • 2004   Ave Maria (SB duet)

Here's a chronological list (attempting to be complete) of his compositions (which are available on ScoreExchange):
  • 1986   This Is The Day
  • 1988   Missa Sancti Pauli
  • 1989   There Is No Rose
  • 1990   Death, Be Not Proud
  • 1993   Diaphonic Mass (organum)
  • 1993   Wind Quintet
  • 1997   Ave Verum
  • 2000   Funeral Sentences
  • 2001   Magnificat (Maryland Service)
  • 2002   Elegy for Strings
  • 2002   Nunc Dimittis (Maryland Service)
  • 2002   Verbum Caro Factum Est
  • 2003   English Missa Brevis
  • 2004   Ave Maria (SB duet)
  • 2004   Ave Regina Caelorum
  • 2004   Regina Caeli Laetare (Soprano, Piano, Cello)
  • 2008   Psalm 111: I Will Give Thanks Unto The Lord

Also I should mention the several CD releases (of steadily increasing quality) of his own popular music compositions. The 2008 album is uniformly excellent. Particularly excellent from his 2003 album are:
  • Hold Your Candle Over Me
  • Never Coming Home Again

Copyright (c) 2013 Mark D. Blackwell.

Tuesday, April 30, 2013

Essential jQuery

Recently, I picked up the bare essentials of jQuery from the book jQuery: Visual QuickStart Guide by Steven Holzner, Peachpit Press, 2009.

However, a word of warning: the book is somewhat badly edited, and there is no corrected edition (still as of this writing).

From the core jQuery source code, this page also is useful. Here are my brief notes:

jQuery refers to a certain syntax, $(thing) for any thing, as 'jQuery-wrapping'.

The identifier $ is an alias for jQuery. Both are used in the following ways:

  • $(function)  –  Append a function to the list to be run when the document is ready: a shortcut for $(document).ready(function).

  • $(CSS-selector-string)  –  Select some nodes in the document.

  • $(HTML-string)  –  Create HTML for insertion.

  • $(DOM-node)  –  Like saying simply DOM-node, but change the value of this and set context (an attribute used by jQuery). Examples are:
    •   $(document)  –  The document.
    •   $(this)  –  this.

  • $.method  –  (This one has a dot and no parentheses.) Run a utility method.
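To make those forms concrete, here is a toy dispatch on the argument's type, in plain JavaScript (purely illustrative; this is nothing like jQuery's real implementation):

```javascript
// Toy illustration of how $() dispatches on its argument's type.
// (Illustrative only -- not jQuery's actual implementation.)
function $(thing) {
  if (typeof thing === 'function') {
    return { kind: 'ready-handler' }; // $(function)
  }
  if (typeof thing === 'string') {
    return /^\s*</.test(thing)
      ? { kind: 'html' }       // $(HTML-string): markup starts with '<'
      : { kind: 'selector' };  // $(CSS-selector-string)
  }
  return { kind: 'dom-node' }; // $(DOM-node), e.g. $(document), $(this)
}
$.trim = function (s) { return s.trim(); }; // $.method: a utility method
```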

The jQuery methods selected for explanation in the book are:

  • Methods on jQuery-wrapped collections of HTML elements:
    • addClass,  after,  alt,  animate,  append,  attr,  before,  bind,  clone,  css,  each,  (event binder methods),  fadeIn,  fadeOut,  fadeTo,  height,  hide,  hover,  html,  is,  (jQuery-UI methods),  length,  load,  one,  serializeArray,  show,  size,  slice,  slideDown,  slideToggle,  slideUp,  text,  toggle,  toggleClass,  unbind,  val,  width,  wrap

  • Event binder methods:
    • Keyboard   –   keydown,  keypress,  keyup

    • Mouse   –   mousedown,  mouseenter,  mouseleave,  mousemove,  mouseout,  mouseover,  mouseup

    • The rest   –   beforeunload,  blur,  change,  click,  dblclick,  error,  focus,  load,  resize,  scroll,  select,  submit,  unload

  • jQuery-UI methods:
    • accordion,  datepicker,  dialog,  progressbar,  slider,  tabs

  • Methods on jQuery-wrapped HTML strings:
    • insertAfter,  insertBefore

  • Utility methods:
    • ajax,  browser,  each,  get,  grep,  inArray,  isArray,  isFunction,  makeArray,  map,  post,  support,  trim,  unique

Copyright (c) 2013 Mark D. Blackwell.

Friday, March 22, 2013

Stack Exchange family (including Stack Overflow) doesn't care about signout

I again registered a test account for 'Log in with Stack Exchange'.

Disappointingly, after logging out of every site in the SE family, it still lets me log in again merely by clicking buttons, without typing anything.

After more than a year, clearly the managers of the Stack Exchange network family of websites don't give a hoot about protecting users of shared computers with a truly effective signout.

What if some rude person at a party click-logged into and deleted someone's account, thus trashing their vast accumulation of reputation?

Copyright (c) 2013 Mark D. Blackwell.

Wednesday, November 7, 2012

Install Opa language on 32-bit Debian squeeze, howto

The coolest feature of the Opa web programming language is that it automatically divides developers' programs into server and client sides, compiling to JavaScript.

Though the Opa compiler (as of this writing) doesn't have a 32-bit binary for Windows, I got it working in an easy way on (32-bit) Debian squeeze, after upgrading my nodejs installation.

Following Opa's instructions to install as a user (under the heading, Other Linux Distribution), I downloaded and ran their 32-bit Linux self-extracting package. When prompted, I chose to install it into ~/progra/mlstate-opa.

Then, after navigating to A tour of Opa in the sidebar, under the heading, Easy Workflow, I found their sample program and typed it into a file, 'hello.opa'. The command:

$ opa hello.opa --

errored out, asking for more npm modules to be installed.

Rather than exactly following their suggested course of action, which would have installed node modules to root-owned directories, I typed:

$ npm install mongodb formidable nodemailer simplesmtp imap

After that the compiler worked just fine.

Copyright (c) 2012 Mark D. Blackwell.

Friday, September 28, 2012

Frontend experience

Recently, I acquired some practical website frontend experience—which took quite a bit of learning!

For an initial demo for a startup, I analyzed, selected and set up all the infrastructure (Rails, Heroku & Amazon). I wrote all the CSS frontend. I also wrote all the working database backend.

See the demo! See how its layout is fluid?

(Click here, if you missed the above links.)

It doesn't have multiple user capability yet; it's just a demo, at this time.

I made this in the pursuit of becoming a does-everything website developer.

Copyright (c) 2012 Mark D. Blackwell.

Monday, September 17, 2012

Website page layouts, proofs of concept

A big part of frontend website development is implementing webpage layouts using CSS stylesheets (of course).

Recently, I've been doing a great deal more business in the area of layouts (specifically for Rails websites), and especially the work of implementing those layouts by developing CSS stylesheets. Whether or not this is really programming! (Well, I think it is.)

I find it much less efficient to run the Rails server, and much more efficient to 'web-browse' the local filesystem. The work progresses much more quickly, in other words, when it is isolated from any complicating factors arising from misunderstandings of the Rails server, jQuery, ERB/HAML, and perhaps even Sass. The weightiest reason for this improvement (by far) is the troubleshooting principle, 'divide and conquer'. Less important is that the filesystem also is somewhat faster.

It is much more doable (dare I say, even feasible) to get isolated layouts working using pure CSS and HTML (while keeping class names simple). And the same is true while paring down a stylesheet to be as simple and clean as possible.

Of course, a CSS-reset stylesheet further simplifies cross-browser development. Also, for HTML5's semantic tags (header, footer, nav, etc.), it is essential to include a (JavaScript) HTML5 shim (or 'shiv') script. So I include both of these best practices.

I have prepared a repository of my CSS (layout) proofs of concept on GitHub—including nine(!) useful proofs (as of now, September, 2012).

These layout proofs contain stylesheet code the way I write for Rails projects as much as possible (without actually including Rails).

Copyright (c) 2012 Mark D. Blackwell.

Monday, August 27, 2012

Crisp image edges in web browsers, howto

Sometimes, website creation frontend work involves extracting images from pages rendered by browsers. These pages may be wireframes, for instance.

Of course, it is appropriate that web pages (displayed in a browser) contain some blurring for good looks (which becomes plainly visible if blown up to 1600% by Photoshop, etc.)

Of course, it is appropriate also that some images of a wireframe (such as icons) be blurred, because icons are created normally by a dithering process.

Although image blurring (for demonstration purposes) is appropriate and looks good, such additional blurring is bad when images are extracted for reuse on a webpage, because the blurring will then happen twice.

To avoid this double-blurring problem, and for pixel art, the following method will set up a web browser for you which does not blur images:
  1. Download and install the latest SeaMonkey web browser:

    http://www.seamonkey-project.org/releases/

  2. For your particular operating system, locate your profile folder by reading:

    http://www.gemal.dk/mozilla/profile.html

  3. Immediately below your profile folder, make sure a folder exists named, 'chrome' (not the Google browser), and that a file exists in the chrome folder called, 'userContent.css' (or create them).

  4. Append to userContent.css the following lines: all are for resampling of images by the desired (in this case) nearest-neighbor method:

    (Note: I leave intact (below) some other browsers' settings for this, just in case you want to add these lines to your particular browser, in whatever way.)
/*
Gecko (Firefox & SeaMonkey)
Webkit (Chrome & Safari)
*/
img {
  image-rendering: optimizeSpeed;             /* Older Gecko */
  image-rendering: optimize-contrast;         /* CSS3 draft proposal */
  image-rendering: -webkit-optimize-contrast; /* Webkit */
  image-rendering: crisp-edges;               /* CSS3 draft proposal */
  image-rendering: -moz-crisp-edges;          /* Gecko */
  image-rendering: -o-crisp-edges;            /* Opera */
  -ms-interpolation-mode: nearest-neighbor;   /* IE8+ */
}
References:
http://help.dottoro.com/lcuiiosk.php
https://github.com/thoughtbot/bourbon/pull/102
http://productforums.google.com/forum/#!topic/chrome/AIihdmfPNvE
https://bugzilla.mozilla.org/show_bug.cgi?id=41975
https://developer.mozilla.org/en-US/docs/CSS/Image-rendering
http://www-archive.mozilla.org/unix/customizing.html#usercss
http://stackoverflow.com/questions/7615009/disable-interpolation-when-scaling-a-canvas
http://nullsleep.tumblr.com/post/16417178705/how-to-disable-image-smoothing-in-modern-web-browsers
http://www.w3.org/TR/2011/WD-css3-images-20110712/#image-rendering

Copyright (c) 2012 Mark D. Blackwell.

Saturday, July 7, 2012

Manage long-running external webservice requests from Rails apps (on cloud servers), howto

Case: as long as Rails is synchronous, requests to external webservices drive the use of server resources to impossible levels, even when the webservices behave normally, let alone when they are long delayed.

Plan: two web apps (one Rails, the other async Sinatra) can fairly easily manage the problem of external web service requests by minimizing use of server resources—without abandoning normal, threaded, synchronous Rails. The async Sinatra web app can be a separate business, even a moneymaking one.

This solution uses RabbitMQ, Memcache and PusherApp.

The async Sinatra web dynos (on the one hand) comprise external webservice request brokers. Also they have browser-facing functionality for signing up webmasters.

The Rails web dynos don't wait (on the other hand) for external webservices and they aren't short-polled by browsers.

This attempts to be efficient and robust. It should speed up heavily loaded servers while remaining within the mainstream of the Rails Way as much as possible.

E.g., it tries hard not to Pusherize browsers more than once in the case that a cached response from an external webservice was missed, but it relies on browser short-polling (after perhaps a 10-second timeout) to cover these and other unusual cases.

But in the normal case browser short-polling will be avoided so Rails server response time should be peppy.

It tries to delete its temporary work from memcache but even if something is missed, memcache times out its data eventually so too much garbage won't pile up there.

Note: this is for web services without terribly large responses (thus appropriate for memcaching). Very large responses and non-idempotent services should be handled another way such as supplying them directly to the browser.

Method: the Rails web app dynos immediately use memcached external webservice responses if the URLs match.

Otherwise they push the URL of each external webservice request and an associated PusherApp channel ID (for eventually informing the browser) to a RabbitMQ Exchange.

For security purposes, minimal information is passed through PusherApp to the browser (only suggesting a short-poll now, not where).

The Rails web dynos (if necessary) return an incomplete page to the browser as usual (for completion with AJAX).

To cover cases where something got dropped, the browser should short-poll the Rails app after a longish timeout. Its length should be set by an environment variable, and it may be shortened to half a second when the Rails website is not terribly active, or when the async Sinatra web dynos are scaled down to off.

Each async Sinatra web dyno attaches a queue to the Rails app's RabbitMQ exchange for accepting messages without confirmation.

With each queued message, an async Sinatra web dyno:
  1. Checks the memcache for the external webservice request (with response)—if present, it:
    • Drops the message. (Some may slip through and be multiply-processed, but that's okay.)
    • Frees memcache of the request (without response) if it still exists (see below).
    Otherwise it checks the memcache for the external webservice request—without response. If recently memcached (perhaps within 10 seconds) it drops the message. (Some may slip through and be multiply-processed, but that's okay.)
    Otherwise it makes the request to the external webservice, setting a generous response timeout (maybe 60 seconds).
  2. Memcaches the external webservice request (without response) with the current time (not in the key).
  3. If the request times out, drops it in favor of letting the browser handle the problem, but leaves the memcached external webservice request (without response) for later viewing by async Sinatra web dynos.
  4. (Usually) receives a response from the external webservice request.
  5. Again checks memcache for the external webservice request (combined with the same response). If it's not there:
    • Pusherizes the appropriate browser. (Some requests may be multiply-processed, but that's okay.)
    • Memcaches the external webservice request (with response).
    • Clears from memcache the external webservice request without response.
The browser then asks the Rails web dyno to supply all available AJAX updates.

The Rails web dyno returns (usually incomplete: whatever is memcached—some may have been dropped, but that's okay) a set of still-needed AJAX responses to the browser (for further completion with AJAX).

Or (if all were memcached) the Rails web dynos return the complete set of outstanding AJAX responses to the browser.
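The per-message steps above can be sketched roughly as follows (in JavaScript for brevity, though the plan calls for Ruby; a Map stands in for memcache, and fetchService and notifyBrowser are invented stand-ins for the external webservice call and PusherApp):

```javascript
// Rough sketch of one async web dyno handling a queued message.
// Step numbers match the list above. A Map stands in for memcache;
// fetchService and notifyBrowser are invented stand-ins.
const cache = new Map(); // key: request URL; value: {response} or {pendingSince}

async function handleMessage(url, channelId, fetchService, notifyBrowser) {
  const cached = cache.get(url);
  // 1. Response already cached: drop the message (duplicates are okay).
  if (cached && cached.response !== undefined) return 'dropped';
  // 1 (cont.). Request recently marked in flight: drop the message.
  if (cached && Date.now() - cached.pendingSince < 10000) return 'dropped';
  // 2. Memcache the request (without response) with the current time.
  cache.set(url, { pendingSince: Date.now() });
  let response;
  try {
    // 1 (cont.)/3. Make the external request, with a generous timeout.
    response = await fetchService(url);
  } catch (e) {
    // 3. On timeout, drop it; leave the in-flight marker for other dynos.
    return 'timed-out';
  }
  // 4./5. Got a response; if no one else cached it meanwhile, notify & cache.
  const again = cache.get(url);
  if (!again || again.response === undefined) {
    notifyBrowser(channelId);     // Pusherize the appropriate browser
    cache.set(url, { response }); // memcache the request with response
  }
  return 'completed';
}
```

This is only my sketch of the design's happy path and its duplicate-suppression checks, not a full implementation.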

I'm starting to implement this here, now.

Copyright (c) 2012 Mark D. Blackwell.

Friday, December 16, 2011

Chrome browser cookie exceptions, howto

Google's fast, new Chrome browser has a development process that seems to fix problems rapidly.

An unmet need in the user interface, however, is explicating (right on the page) how to enter domains for cookie handling exceptions.

The proper way to be secure (though you may disagree) IMHO follows:

* Wrench-Options-Under the Hood-Content Settings-Cookies.
* Select, `Block sites from setting any data'.
* Check, `Block third party cookies from being set'.
* Check, `Clear cookies and other site and plug-in data when I close my browser'.
* Click, `Manage Exceptions...'.
* Add and delete hostname patterns until you see what you like.

The problem is, it gives no subdomain examples. Per one bug report, the user cannot figure out the right syntax to enter them. Without adding the right subdomains, navigating to blogger.com mysteriously redirects us in a loop.

Its unusual wildcard syntax, for Google blogging, is:

[*.]blogger.com
[*.]google.com

The brackets must be entered explicitly. In other words, they do not merely indicate optional content.
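As I read it, [*.]example.com matches the bare domain and any of its subdomains, while a plain pattern matches only exactly. A small check function makes these semantics concrete (this is my interpretation of the syntax, not Chrome's actual code):

```javascript
// Sketch of the [*.]domain.tld semantics: the pattern matches the bare
// domain itself and any subdomain of it. (My interpretation -- not
// Chrome's actual implementation.)
function matchesPattern(pattern, hostname) {
  if (pattern.startsWith('[*.]')) {
    const base = pattern.slice(4); // strip the literal '[*.]' prefix
    return hostname === base || hostname.endsWith('.' + base);
  }
  return hostname === pattern; // a plain pattern matches only exactly
}
```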

Some kind of, `"Learn more" about the pattern syntax' link would be awesome.

Explicitly also, one could enter all the relevant subdomains (which is appropriate for some domains):

blogger.com
www.blogger.com
markdblackwell.blogspot.com
google.com
accounts.google.com

Copyright (c) 2011 Mark D. Blackwell.