Thoughtus Confoundus

High security zone for hazardous thoughts. Think many, many times before reading. If you're lucky you'll get away with thinking it's plain crap. The author accepts no responsibility for induced insanity.

Wednesday, October 05, 2005

2 Factor...

With the bank in the hood dishing out these cute little gadgets, there's been the predictable spike in interest in 2-factor authentication. The master has this & this to say. A low-down on token-based 2-factor authentication can be found here.

Schneier sums up by saying something like "...2-factor authentication solves the problem of authentication, and hence mitigates some security concerns. But it does not solve non-authentication-related vulnerabilities, which unfortunately are the more common. So 2-factor authentication does not in fact help much".

Let me present an alternative perspective: 2-factor authentication solves the problem of authentication data compromise at the service provider end.

Let's say that a bank's authentication data store has been compromised. Take the case in which the bank does not have 2-factor authentication; there is nothing preventing an attacker from using the authentication data.

Now, take the case where the bank does have 2-factor authentication. Then, even if the authentication data (user names, passwords, PINs...) are compromised, the attacker is still in the dark, since part of the key is still with the user (i.e. the token which generates a one-time password, or a digital certificate located in a smart card). I am assuming that any secret keys the 2nd factor uses are held in a separate data store at the bank end.
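To make the "part of the key is still with the user" point concrete, here is a minimal sketch of the HOTP-style computation these tokens typically perform (RFC 4226 flavour). The demo secret and counter are illustrative assumptions; in practice the secret would live only inside the token and in the bank's separate secret store.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // HOTP-style one-time password sketch. The shared secret below is a demo
    // value; a real one would live only in the token and in a secret store kept
    // separate from the bank's username/password/PIN store.
    public class OtpSketch {

        // Truncate an HMAC-SHA1 digest of the counter to a 6-digit code.
        static int hotp(byte[] secret, long counter) throws Exception {
            byte[] msg = new byte[8];
            for (int i = 7; i >= 0; i--) {          // counter as 8 big-endian bytes
                msg[i] = (byte) (counter & 0xff);
                counter >>= 8;
            }
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secret, "HmacSHA1"));
            byte[] hash = mac.doFinal(msg);

            int offset = hash[hash.length - 1] & 0x0f;          // dynamic truncation
            int binary = ((hash[offset] & 0x7f) << 24)
                       | ((hash[offset + 1] & 0xff) << 16)
                       | ((hash[offset + 2] & 0xff) << 8)
                       | (hash[offset + 3] & 0xff);
            return binary % 1000000;                            // 6 digits
        }

        public static void main(String[] args) throws Exception {
            byte[] secret = "12345678901234567890".getBytes("US-ASCII"); // demo secret only
            System.out.printf("OTP for counter 1: %06d%n", hotp(secret, 1));
        }
    }

Even with the password/PIN store fully leaked, an attacker still cannot compute the next code without that separately held secret.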

So 2-factor authentication guards against the bank's authentication data being compromised. It effectively closes one door, in terms of the bank's liability.

In contrast to this, authentication data compromise at the user end - through, as Schneier points out, man-in-the-middle attacks and Trojans, as well as just nicking the piece of paper your username and password are written on along with the 2-factor authentication token - is not completely solved. Mitigated, because the attacker now has to steal a token or log another key sequence - but not eliminated.

The neat thing is that the bank is effectively securing its end of the stable by introducing a gadget to be used by the customer. I don't know whether to feel awed or used...
12:05 AM


Monday, September 19, 2005

Articles 'n stuff

The key to a good read is abstraction. Extract the rules; the rules should stand alone, not dependent on the details. The details should be given their due place - the lowest rung in the ladder. A good read charms the reader into taking the rules with him.

I would much rather write a Knuth than a Fowler.
7:39 PM



Bridging the Gap

The following discourse is limited to entry-level technical skills required of graduates in the computing/computer science field within a Sri Lankan context.

Let us commence with the premise - "The goal of (computing/computer science) higher education is to produce employable graduates". If we accept this premise, then we must also accept that we have, in no small measure, failed to deliver the goods.

We still hear complaints from industry that our graduates are not up to the mark. That they lack the skills needed in a commercial environment. That there is significant overhead in training them. That industry alone carries the burden of gearing graduates for employability. That they do NOT hit the industry running. This is the reality.

The university side of the story is that various measures have been put into place to lessen the academic-industry gap. Industry collaborations, industrial placement, and guest/visiting lectures from industry experts are just a few of the measures adopted.

Nevertheless, within the first few months of induction into an industrial environment, new graduates undergo a type of culture shock due to the totally alien manner in which enterprise development is done, in contrast to the small-system development methods they were exposed to as part of coursework. Although the atomic skills are sufficient, they lack an overall big-picture view and find themselves in a new environment with new tools, new practices and bigger problem domains and solution spaces.


So what, then, is going wrong?


Let us dissect the problem from an industry angle. What type of software development does the Sri Lankan software industry engage in, in a broad sense? My guess is that the vast majority would be business application development. It is only a guess, of course. So what would industry consider as ready-to-run skills of a graduate who would be put to work developing a business solution? Again a guess; my guess would be that industry would require:

1) That graduates possess competence in programming, networking and databases. These 3 broad areas can be thought of as the de facto core competencies.
2) That graduates have mastered at least one industry-level programming language (Java, C#, C++, in decreasing order of popularity)
3) That graduates be familiar with at least one DBMS (Oracle, MS-SQL, MySQL/Postgres, in decreasing order of popularity) and be fluent in SQL.
4) That graduates be familiar with at least one web technology (JSP, ASP.Net, PHP...) and have a broad understanding of most.
5) That graduates be familiar with at least one development framework (J2EE, LAMP, .Net, MFC...) and have a broad understanding of the main frameworks.
6) That graduates have a broad understanding of SE/RAD/Enterprise Application Development principles, as well as of those principles within the context of a development framework.
7) That graduates be aware of and have been exposed to popular development platforms (Eclipse, Visual Studio...)

<aside>
These 7 requirements would be the constituent elements of the broad category known as technical skills. In fact, the broad expectations of industry would be that graduates have:

1. Strong problem solving skills
2. Strong communication and interpersonal skills
3. Strong technical skills

Now that we have placed technical skills within a broader picture, we can consider whether university education in fact gears graduates for these 7 entry-level requirements.


</aside>


Looking at modules for computing programs at IIT and UCSC, we can safely assume that requirements 1 & 2 are developed from the freshman year itself. A quick perusal of the final-year modules would indicate that requirements 3 & 4 are also accounted for. Even requirement 6 has been partially dealt with by the presence of modules such as SE and RAD.

Overall, university curriculums appear weak in tackling requirements 5, 6 & 7. The essence of requirements 5, 6 & 7 is that graduates should be ready-to-run with current frameworks, development practices and tools for enterprise business application development. Why? As in, why is this so important? Because that is the main revenue-generating activity of industry - enterprise business application development.

So why does our higher education system not tackle these Enterprise Application Development requirements? Instead, why are the final-year modules of a typical computing degree in SL heavily leaning towards Artificial Intelligence, Computer Graphics, Formal Methods, Pattern Recognition etc.? How helpful are these topics in gearing graduates towards industry?

There may be 2 reasons why universities shy away from a module such as Principles & Best Practices in Enterprise Application Development:
1) There is no room for such a topic once the classical modules are fitted in.
2) Such a module is uncomfortably close to certification courses such as MCSD, JCP...

The second reason is the more serious barrier. However, I believe that there is room for an academic treatment of concepts such as layering, Inversion of Control frameworks, controller patterns etc. It is, however, important to be as unbiased as possible, considering the polarized nature of application frameworks.
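As a taste of what such an academic treatment could look like, here is a minimal, framework-neutral sketch of Inversion of Control via constructor injection. The class names are mine, purely for illustration; a module would contrast this hand-wiring with how a container such as Spring or a J2EE server automates it.

    // Minimal, framework-neutral Inversion of Control sketch (constructor injection).
    // The class names are illustrative only.

    interface OrderRepository {
        void save(String order);
    }

    // A trivial implementation standing in for a real persistence layer.
    class InMemoryOrderRepository implements OrderRepository {
        public void save(String order) {
            System.out.println("saved: " + order);
        }
    }

    // The service depends on an abstraction; it neither creates nor looks up
    // its collaborator - the dependency is handed to it from outside.
    class OrderService {
        private final OrderRepository repository;

        OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        void placeOrder(String order) {
            repository.save(order);
        }
    }

    public class IocSketch {
        public static void main(String[] args) {
            // Here the "container" is just main(): it picks the concrete
            // implementation and injects it. An IoC framework automates this wiring.
            OrderService service = new OrderService(new InMemoryOrderRepository());
            service.placeOrder("order-42");
        }
    }

The teachable point is the direction of control: the service never news up or looks up its dependencies, so the same principle can be examined across frameworks without taking sides.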

Let's see what a module like Principles & Best Practices in Enterprise Application Development should have as its aims & objectives:

Aims
1. Introduce students to the architecture of the most popular application frameworks
2. Introduce students to the principles of Enterprise Application Development
3. Introduce students to industrial level Application Development Platforms

Objectives

1. Students should be able to critically evaluate application frameworks
2. Students should be able to develop an enterprise level application using a popular framework conforming to enterprise application development best practices
3. Students should be able to critically evaluate Application Development Platforms and leverage their power for application development

Indicative Content
A discourse on the J2EE, .Net and LAMP architectures. Patterns used in the J2EE and .Net execution models (front controller, page controller...). Layering principles used in enterprise application development and framework support for layering (Session/Entity beans). A discourse on IDEs and IoC frameworks.
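To give a flavour of that indicative content, here is a minimal front controller sketch against the servlet API: a single entry point looks up a command object per request and delegates to it. The action name and command class are assumptions made up for the example.

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Front controller sketch: every request enters through one servlet,
    // which dispatches to a command object based on the "action" parameter.
    public class FrontControllerServlet extends HttpServlet {

        interface Command {
            void execute(HttpServletRequest req, HttpServletResponse resp) throws IOException;
        }

        private final Map<String, Command> commands = new HashMap<String, Command>();

        public void init() {
            // In a real framework this mapping would come from configuration.
            commands.put("listOrders", new Command() {
                public void execute(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                    resp.getWriter().println("order list goes here");
                }
            });
        }

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            Command command = commands.get(req.getParameter("action"));
            if (command == null) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND, "unknown action");
            } else {
                command.execute(req, resp);
            }
        }
    }

Walking students through something of this shape, and then through how J2EE and .Net frameworks package the same idea, is the kind of academic-yet-practical treatment I have in mind.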


Having said all that, it is not my intention to advocate the obliteration of the AI, Machine Learning and Pattern Recognition type subjects. Of course these subjects are needed for a healthy academic process. I merely suggest side-lining one or two such subjects in favour of a subject that is directly relevant to the employability of a graduate.


The problem lies at the academic-industry boundary: the final year of a computing/computer science degree program. The problem itself is graduates not having enterprise application development skills. The solution is simple - provide them with these skills through an appropriate, relevant final-year module.

Having done a once-over, I realize that this is a (rather long-winded) oversimplification of a multi-faceted problem. But even this oversimplified, naive solution is a step in the right direction...
6:34 PM


Wednesday, May 18, 2005

The mechanism of the Web must change

The problem with the predominantly pull mechanism of the web is that large amounts of data are shuttled towards browsers regardless of the client's previous access to such data. RSS solves this problem when it comes to purely informative web content. That still leaves web applications, which push whole web pages towards browsers.

A web application can be broken down into its presentation and business logic. The norm is to meld these two aspects on the server and push the content towards the browser. I propose a separate presentation layer capable of manipulating data provided in XML format. The presentation layer is downloaded to the client initially; subsequent access to the same web application will require only the data.

Not only will this reduce the traffic between client and server, it will also make the development task simpler and in line with the web services development model.
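A minimal sketch of the server half of this idea, assuming the presentation layer already sits on the client: the application exposes only a data endpoint that returns raw XML. The endpoint path and the account payload are made up for illustration.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpHandler;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    // Data-only endpoint: no HTML is generated on the server; the client-side
    // presentation layer renders whatever XML it receives.
    public class XmlDataEndpoint {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/data/account", new HttpHandler() {
                public void handle(HttpExchange exchange) throws IOException {
                    byte[] xml = "<account><id>42</id><balance>1250.00</balance></account>"
                            .getBytes("UTF-8");
                    exchange.getResponseHeaders().set("Content-Type", "text/xml");
                    exchange.sendResponseHeaders(200, xml.length);
                    OutputStream out = exchange.getResponseBody();
                    out.write(xml);
                    out.close();
                }
            });
            server.start();
            System.out.println("Serving XML data only on http://localhost:8080/data/account");
        }
    }

The repeat-visit traffic is then just this small XML payload rather than a whole rendered page.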

Just a few initial thoughts....
7:10 PM


Friday, July 16, 2004

Simplicity

Trees are just the simplest of graphs - and graphs are simple and general. And there we see the chink in the XML armor.

"Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction." - Albert Einstein

errrr.... not saying I'm a genius. Any real solution has to be simple. That's all.
4:21 PM



XML Must DIE

I have never known, and probably never will know, what the dickens the big deal is with XML anyway!!!! It was always, Always! a minor technology; when compared to SGML, it's almost insignificant. I ask myself, what is the purpose of XML?

"The Extensible Markup Language (XML) is a simple, very flexible text format derived from SGML (ISO 8879). Originally designed to meet the challenges of large-scale electronic publishing, XML is also playing an increasingly important role in the exchange of a wide variety of data on the Web." (Extensible Markup Language Activity Statement, http://www.w3.org/XML/Activity, May 2004)

So how does it 'meet the challenges of ..... publishing'? By providing a simple way of defining information - information meant for the web. And what is this simple method based on? A tree structure! I repeat, nay rewrite: a tree structure! This is the crux of the matter, and the fact that all of us have missed!

A basic premise of XML is that all information used in an electronic environment can be represented using a tree structure. This is a hidden premise that skulks around, jumps you from behind, and delivers a sharp thwack on the head!

I cannot take credit for this point of view; it was brought up by a person at an open source conference. It is unfortunate that I do not know who this chap was, because the significance of his statement is, well.... very significant!

a) What is so special about a tree structure? Has anyone proved that any kind of information can be represented by it?
b) Are there any formal methods for transformation of information encapsulated in trees?


I don't think so.
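Question (a) can at least be made concrete with a small counterexample. The sketch below (class and field names are mine, for illustration only) builds a graph containing a cycle; pure element nesting cannot capture the back reference, which is why tree-based formats have to bolt on id/reference mechanisms.

    import java.util.ArrayList;
    import java.util.List;

    // A tiny directed graph: Alice knows Bob, Bob knows Carol, Carol knows Alice.
    // The cycle cannot be expressed by nesting alone, which is the sense in which
    // trees are a strict subset of graphs.
    public class GraphVsTree {

        static class Person {
            final String name;
            final List<Person> knows = new ArrayList<Person>();
            Person(String name) { this.name = name; }
        }

        public static void main(String[] args) {
            Person alice = new Person("Alice");
            Person bob = new Person("Bob");
            Person carol = new Person("Carol");
            alice.knows.add(bob);
            bob.knows.add(carol);
            carol.knows.add(alice);   // closes the cycle

            // A naive "serialize children by nesting" walk would recurse forever here.
            System.out.println(alice.name + " -> " + alice.knows.get(0).name
                    + " -> " + bob.knows.get(0).name
                    + " -> " + carol.knows.get(0).name);
        }
    }

So a tree can mirror this structure only by adding references or duplicating nodes, which is exactly the limitation being complained about.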

Also, it is very interesting to note that although industry has bought this XML concept lock, stock, and two smoking barrels, academia has been rather slow on the uptake. Might this be because XML has no solid universal theoretical foundation?

If this is the case, what kind of damage will XML do to the science of information representation? By how many years will the discovery of a universal information representation scheme, be delayed?

The only redeeming quality of XML is that it is very 'tightly defined' and very 'clean' - strong syntax and semantics. However, it gives us an illusion of simplicity. After all, the very first and only scheme to bring some order to the electronic information mayhem will undoubtedly 'seem' simple. Anything would!

And isn't it fishy that the .Net web services rely heavily on XML?

What we need is a genuinely simple, universal solution to information representation. It's out there somewhere! But before it is discovered....

XML MUST DIE!
4:09 PM



Open Source..... My Take

OpenSource - the Woodstock of the 2000s...... (Oh Frink! that doesn't go!) the Primogenial Decade of the Third Millennium (now that's funky!). And it's a Woodstock minus all those annoying babies (am I getting things mixed up - Baby Boomers were linked to Woodstock, weren't they?). My thoughts, well, how do I put this mildly? REVOLUTION!!!!

But, (I really love that but - not the word 'but', but 'that' particular 'but' - you know, the one at the beginning of this sentence - it should be pronounced with emphasis for full effect, and very loud - shout) not in any of the ways most people think it's a revolution. The most popular pro-opensource arguments are:

1. Linus' Law - "Given enough eyeballs, all bugs are shallow": This seems to be the primary reason why opensource software is so high in quality. Hence, this seems to be the way forward.
2. Hiding the source is an atrocity of humongous proportions from a consumer point of view - you simply cannot restrict the user's rights in this way: the car-with-the-welded-hood argument.
3. The equation argument: Do we pay Newton's descendants a royalty each time we use his Law of Gravitation?
4. We hate micro###$$@**%!soft - anything that challenges Microsoft's world dominance is Good! (This is by far the best of the lot, btw.)

Methinks, though, that opensource is way more important. While popular opinion is that open source IS the next big paradigm shift in software engineering, I think that opensource will give birth to Formal Computer Science. I know, I know, this is where you, the reader, go - What the FRINK is this Hobo @? Is he an idiot or what? Doesn't he know that Computer Science is well established already?

Woah...... But is it well established? Is it established in the sense that Physics or Chemistry are established as scientific disciplines? I think NOT. I think that the maturity of computer science is equivalent to the maturity of Physics in the Galilean Age. I strongly suspect that computer scientists are still waiting for the Computer Science equivalent of Newton to come along and set things in order!

Now the question is, "Is the appearance of Newton-like figures a completely random occurrence?" I think not. I think that the free availability of knowledge catalyzes the process of Newton-like figures popping up throughout history and formalizing scientific disciplines. The Renaissance, which led to the publication of books, resulted in the knowledge of a few 'alchemists' and 'wizards' becoming accessible to a larger group of individuals. This was the knowledge build-up which resulted in Newton formalizing the scientific discipline of Physics.

Is this always the case? What about the Einsteinian Revolution? Didn't that happen in a time of war, when scientific discoveries were closely guarded secrets? Well, the answer is yes, it did. BUT (again, I love this but), scientific information was made freely available to a significant scientific community, and funding was made available to that community. So even the Einsteinian revolution took place in an age where information WAS shared - albeit amongst a small community, but a community that was not confined to a single company or even country!

So here is the gist of what I have said so far:
1. Formalization of sciences comes in the wake of 'shared knowledge'.
2. Computer Science is yet to be formalized.
3. So far, a lot of computer science knowledge has been locked up in 'closed source' software.

So here's how I see the score (finally - I can almost hear the sighs of relief):

The Opensource Paradigm is poised to kick the doors of knowledge in Computing wide open! This will lead to a renaissance in computing, which will inevitably (hopefully) result in a formalized Science of Computing!


4:05 PM



Information Theory - Man Have we SCREWED UP!

Information Theory is seriously flawed! Or at least severely limited. Any sound theory has to build on basics, axioms, fundamentals - the real unexplainable. Any theory that is built on anything but the most primitive of axioms runs the risk of being limited. An example to illustrate the point, with apologies to Sir Isaac Newton and Albert Einstein: Newton assumed that 'time' was a very special quantity - an absolute - whereas other fundamental measurements such as length and velocity were relative. Some time later, Einstein comes along and questions the 'absoluteness' of time. He assumes that there is nothing at all very special about time - it's just the reading you get from a clock; there is no such thing as time flowing from the past into the future. From this simple but 'fundamental' axiom comes the Theory of Special Relativity, which has superseded Newtonian mechanics.

I believe that we are taking obviously non-fundamental axioms to be the basis of information theory. This is a bit of a tongue-in-cheek statement because I have absolutely no knowledge of Information Theory - but what the heck, eh? Why should I let a minor issue like that stop me?

As I understand it, contemporary Information Theory builds on the representation of binary or character data. BUT, methinks data representation takes a much more fundamental form.
Isn't the most basic representation of data the strength and direction of an electric field? The physical manifestation of data in the human brain would be in the form of synaptic configurations characterized by the strength and direction of an E-M field. Even the storage of binarised data in modern computers is, at its most basic level, characterized by the strength and direction of an E-M field.
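For reference, the 'pure statistical quantization' I am grumbling about is essentially Shannon's symbol-level formulation: information content depends only on symbol probabilities, not on any physical carrier. A minimal sketch (the sample message is arbitrary):

    import java.util.HashMap;
    import java.util.Map;

    // Shannon entropy of a symbol stream: H = -sum over symbols of p * log2(p).
    // This is the carrier-agnostic, purely statistical view of information.
    public class EntropySketch {

        static double entropy(String message) {
            Map<Character, Integer> counts = new HashMap<Character, Integer>();
            for (char c : message.toCharArray()) {
                Integer n = counts.get(c);
                counts.put(c, n == null ? 1 : n + 1);
            }
            double h = 0.0;
            for (int count : counts.values()) {
                double p = (double) count / message.length();
                h -= p * (Math.log(p) / Math.log(2));   // log base 2
            }
            return h;
        }

        public static void main(String[] args) {
            System.out.printf("bits per symbol: %.3f%n", entropy("thoughtus confoundus"));
        }
    }

Notice that nothing in that calculation cares whether the symbols are voltages, synapses or ink.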

SO, the conclusion: Any Theory on Information has to be based on Electromagnetic Theory - not on pure statistical quantization!

Note to self - Investigate Information Theory.
3:49 PM


© gumz 2005