Languages and scripts?


Postby atang1 » Sat Sep 11, 2004 11:34 pm

I believe the curriculum should begin with an exposure to all the known languages. Then you look at the platform, the operating system, and the languages and scripts applicable today.

After that: C, C++, C#, and Java; PHP/Perl; and UML.

The Unified Modeling Language (UML) can be made executable and is the link between hardware and software for embedded systems.

It may also help to understand assembly language. It is very slow to program in, but produces the tightest code possible, and it avoids all the compiler overhead (and platform virtual machines).
atang1
 

Postby atang1 » Tue Sep 21, 2004 8:48 am

But beyond languages and scripts is writing the content to be programmed. We call these wizards, or database modules.

For instance, form letters of all kinds are wizards. If you have a text editor or a word processor, its wizards give you a form letter that you can alter a little and send out as your personal letter.

For webpages, you also need wizards that you can change to make a page your own.

Wizards, then, are prearranged programs used to save time. Eventually you would want everything to be done automatically, just as you wish.

So you take all the wizards and assign each an instruction word. Then when you type "sa", you get a form letter as an email and send it to all your customers; that email will sell your amplifier product. "sa" becomes the macro that does the email. You need a middleware layer with the program that makes the changes to the form letter, interfaces with the customer list, and batch-processes the email broadcast or webcast.
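
Here is a minimal sketch of such an "sa" macro in Python, just to make the idea concrete. The customer list, the template text, and the SMTP host are all hypothetical placeholders, not anything from an actual system:

[code]
# A minimal sketch of an "sa"-style macro: fill a form letter for each
# customer and batch-send it.  CUSTOMERS, the template, and the SMTP host
# are hypothetical placeholders for illustration only.
import smtplib
from email.message import EmailMessage
from string import Template

FORM_LETTER = Template("Dear $name,\n\nOur new amplifier is now available.\n")
CUSTOMERS = [{"name": "A. Smith", "email": "a.smith@example.com"}]

def sa(dry_run=True):
    """Expand the form letter for every customer and send the batch."""
    messages = []
    for c in CUSTOMERS:
        msg = EmailMessage()
        msg["From"] = "sales@example.com"
        msg["To"] = c["email"]
        msg["Subject"] = "New amplifier"
        msg.set_content(FORM_LETTER.substitute(name=c["name"]))
        messages.append(msg)
    if not dry_run:                       # only contact a mail server when asked
        with smtplib.SMTP("localhost") as server:
            for msg in messages:
                server.send_message(msg)
    return messages

for m in sa():                            # dry run: just show what would be sent
    print(m["To"], "->", m["Subject"])
[/code]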

The program that writes the program to do "sa" is in the VHLD language.

This comes back to middleware, or object technology, which is programming a program to build a program. Automation in software is a program that writes a program; that program is the object.

So, between 1993 and 1996, the industry worked out the VHLD language in SAP after object technology was understood. The middleware industry was born. IBM is the major force behind data mining (modules in the DB2 database) and object technology (the VHLD language). The VHLD language includes search-engine, download-manager, and batch-processing scripts.
atang1
 

Postby thomas_w_bowman » Tue Sep 21, 2004 10:28 am

atang1 wrote:
It may also help to understand assembly language. It is very slow to program in, but produces the tightest code possible, and it avoids all the compiler overhead (and platform virtual machines).


To me, Assembler is like knowing what "the keys" are in the computer, the functions that underlie all that occurs - the smallest building blocks a computer has, and ultimately what the CPU will actually process.

Atang1 has selected some very good language and script choices. When one understands the instruction set (assembler) and can relate it to a repeated task, macros can be developed to avoid coding the assembler commands over and over - ultimately the wizards are programs that group existing code with appropriate data to do what one wants without having to know any assembler at all (even SQL, which is a DB2 query/update language).

The biggest issue will be knowing what needs to be done with the computing application(s) - knowing what the 'user' wants. It generally works out that whichever set of users you learn about, and whatever their requirements involve, determines which sets of users a given programmer will be more (or less) useful to.

For example, my BS is in Accounting, and although I also studied programming, I am most useful to those using the computer for accounting functions (in my case, ending up working with mainframes and looking at networks as interfaces). If you were to involve yourself with networks and learn their strengths and weaknesses - the need for some redundancy, and the balance between cost and risk - you might be most useful in designing, implementing, and modifying networks. The same goes for marketing, data mining, or any of the many other aspects of computing.

This is actually Microsoft's weakness: they assume that they understand requirements, but then err on the side of assuming that all computer users are stupid, thus restricting the wizards from being flexible enough for those who want to configure things in more detail, or with less regard for flashy graphics and more focus on data handling and on when programs should load or unload. Microsoft's other weakness comes from their strength - marketing. For the most effective marketing, new programs must be purchased as quickly as possible, too often driven by a vision that does not consider the need for continued support of old features. That tends to annoy developers, who must constantly upgrade and actually rewrite their work too often to allow for stability or meaningful enhancements, substituting Marketing's ideas of enhancement, which tend to consume excessive resources such as memory, CPU cycles, and disk space.

So it will be useful to understand Unix/Linux, and possibly other operating systems - to know what they have in common and how they differ - as well as assembly. You may not use this knowledge often, but the understanding will help you develop a perspective on "what matters" and on what can be passed directly to the OS as a function call (to avoid 'reinventing the wheel', much like in the 1980s when BASIC programs used 'peeks and pokes' - getting too close to the hardware to be maintainable, where most often an OS function could have handled the requirement and would have let OS changes actually upgrade the software without any need to recode)...

You were looking for a simple list, no doubt. But to be really useful in a computing environment, you will need to "learn how to use a manual" rather than memorize a few conventions, because the business of technology is about change - but users still want what they want and care very little about technology, as long as it delivers what they require. Such work is all about fulfilling requirements, which has to start with defining and evaluating requirements - and documenting what you will deliver in terms of specific function (or you might be held accountable for changes that you did not realize needed to be made).
Better living thru technology...
"Open the Pod Bay Doors, HAL..."
Join Folding team #: 33258
thomas_w_bowman

Postby atang1 » Wed Sep 22, 2004 2:51 am

You did not touch on quality control of software - single versus double entry - even though you are trained in accounting. People are using cookies to verify your entered profile for trust and security?

The future of software programming is tight code and efficiency through automation, which always means writing a program that can write another program to do your job automatically. Today, too much programming just does one job, without automatically adapting to different jobs.
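
A minimal sketch of that idea - one program generating and running another - using only the Python standard library. The generated script's contents and file name are made-up examples:

[code]
# A program that writes a program: generate a small script, then run it.
# The generated script's contents and file name are illustrative only.
import subprocess
import sys
from pathlib import Path

def write_report_script(columns):
    """Generate a tiny script tailored to the columns the user asked for."""
    body = "\n".join(f'print("column:", {c!r})' for c in columns)
    path = Path("generated_report.py")
    path.write_text(body + "\n")
    return path

script = write_report_script(["customer", "amplifier_sales"])
subprocess.run([sys.executable, str(script)], check=True)  # run the generated program
[/code]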

So, it takes much teaching of philosophy to program computers that can handle any wish you may have. The first step, perhaps, is an embedded approach to a live CD that can do one job well. Then many live CDs become wizards. From these wizards, we can have every wish accomplished from a carousel.

Or you can have a list of scripts that assemble all the software modules to run a web server, a movie house, or a body-building exercise machine - all within the operating system. Currently, we can call up office functions, internet surfing, and so on. We have to go to the next level of full functionality. You can use removable storage (mobile-rack HDD, floppy, USB drives, etc.) for convenience.
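
As a small illustration of the web server case, here is a sketch of a script that brings up a minimal web server from standard Python modules. The port and served directory are arbitrary example choices:

[code]
# Minimal "assemble the modules and run a web server" script, standard library only.
# Port 8080 and the current directory are arbitrary example choices.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def run(port=8080):
    """Serve the current directory over HTTP until interrupted."""
    server = HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler)
    print(f"Serving on port {port}; press Ctrl+C to stop.")
    server.serve_forever()

if __name__ == "__main__":
    run()
[/code]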
atang1
 

Postby atang1 » Fri Sep 24, 2004 12:51 pm

For people who are really interested in tight code and entry-level software, study the Tcl/Tk compiler/interpreter.

Tcl/Tk crosses platforms. But Windows 3.1/32-bit has not been supported since Tcl/Tk version 8.0.

A famous Linux dialer (with Portuguese-language buttons), pppliga.tcl, is often studied. There are two versions, 9 KB and 12 KB long.
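
Python ships a binding to the same Tcl/Tk toolkit (tkinter), so here is a rough sketch of the kind of small GUI utility being described; the window title, labels, and behavior are made up for illustration:

[code]
# Minimal GUI in the spirit of a small Tcl/Tk utility, written with Python's
# tkinter binding to the same Tcl/Tk toolkit.  Title and labels are made up.
import tkinter as tk

root = tk.Tk()
root.title("Dialer sketch")
status = tk.StringVar(value="Idle")

def connect():
    status.set("Dialing...")        # a real dialer would launch pppd here

tk.Label(root, textvariable=status).pack(padx=10, pady=5)
tk.Button(root, text="Connect", command=connect).pack(padx=10, pady=5)
root.mainloop()
[/code]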
atang1
 

Postby atang1 » Sun Sep 26, 2004 2:12 am

When we program mainframe computers, we use memory-address switching and hook points. When we program PCs, we use FAT table addresses and hook points. When we program enterprise systems, we have to use IP addresses, FAT tables, and hook points.

With Linux going DRAM-centric, the IP address is now the name of the memory bank, so that the memory bank can be addressed remotely.

All programming languages and scripts will have to address this difference in the enterprise system - another layer in the packet header.
atang1
 

Postby thomas_w_bowman » Mon Sep 27, 2004 7:12 am

Yes, and as you point out, compilers can make compatibility 'transparent' to a point by generating the appropriate machine instructions for the source code.

All hardware has in common some binary method of resolving addresses for memory and devices, which is easier for us to deal with as hexadecimal. And thank goodness compilers and scripts can translate the binary values into something meaningful even to non-technical persons.
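
A quick sketch of that translation using Python's built-in conversions; the address value itself is an arbitrary example:

[code]
# Binary, hexadecimal, and decimal views of the same arbitrary example address.
address = 0b1010_0000_0001_0000      # binary literal
print(bin(address))                  # 0b1010000000010000
print(hex(address))                  # 0xa010 - the form people usually read
print(address)                       # 40976 in decimal
[/code]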

A good system offers all the needed options in some manner that is easily referenced and controlled in language familiar to us - while 'less mature' systems may require coding or scripts, because the machine level itself is nearly useless without some form of easy-to-access interface.

However, sometimes there are so many icons, symbols, and choices that they begin to define a 'new' language, which can quickly overwhelm and become cumbersome, because the many options are like an alphabet with thousands of letters. This is why many programs allow user-defined toolbars (so we can avoid what we seldom need and keep the options at a manageable level). Just as bad is a lack of any means to access a feature - that's when Windows users end up using REGEDIT, for example...
Better living thru technology...
"Open the Pod Bay Doors, HAL..."
Join Folding team #: 33258
thomas_w_bowman

Postby atang1 » Sat Oct 02, 2004 1:10 am

Now that I am actively involved with some of the stripped-down distros of Linux, I am going back to the small Tcl/Tk compiler/interpreter language.

This small language has had many years of development, and it was intended to cross platforms. So we can adapt many fine programs for a stripped-down Linux operating system.

The way to begin learning Tcl/Tk is to look at what has already been written.

In Tcl/Tk FAQ #4, there is a package catalog with a fairly substantial listing. Since it is a volunteer effort, it is still growing every day.

The beauty is that any program, even one for large computers or networks, can cross platforms. And, like COBOL, the language is easy to read.

The problem is that the Tcl/Tk versions themselves have to satisfy dependencies and have device drivers compatible with the operating system versions.

Have fun - these programs are free.
atang1
 

Postby atang1 » Sun Oct 03, 2004 4:49 am

Now that we have discovered a fine way to study, or drop in, the 18,000+ free Linux software packages, the world of Linux is fairly small.

They are all on the internet, if you only take the trouble to search for the catalogs.

You can search them by programming language and script.

Perl, PHP, GCC, Python, etc.

Or cross-reference the search by the functions needed for your programs: keyword plus keyword to narrow the choices. You might even limit by the length of the shared code, or the compressed code length, of any similar software.
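
A toy sketch of that kind of keyword-plus-size filtering over a package catalog; the catalog entries, keywords, and sizes below are invented examples, not a real listing:

[code]
# Filter a package catalog by language, keywords, and maximum size.
# The catalog entries below are invented examples for illustration only.
CATALOG = [
    {"name": "webalizer", "language": "C", "keywords": {"web", "logs"}, "kb": 220},
    {"name": "tclhttpd", "language": "Tcl", "keywords": {"web", "server"}, "kb": 150},
    {"name": "mailman", "language": "Python", "keywords": {"email", "lists"}, "kb": 900},
]

def search(language=None, keywords=(), max_kb=None):
    """Return catalog entries matching every given criterion."""
    hits = []
    for pkg in CATALOG:
        if language and pkg["language"] != language:
            continue
        if not set(keywords) <= pkg["keywords"]:
            continue
        if max_kb is not None and pkg["kb"] > max_kb:
            continue
        hits.append(pkg["name"])
    return hits

print(search(keywords=["web"], max_kb=300))   # ['webalizer', 'tclhttpd']
[/code]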

Are we having fun now?
atang1
 

Postby atang1 » Thu Oct 07, 2004 8:03 pm

Mainframe software had always lagged behind because of legacy data protection. Now IBM has moved off standalone software and gone ahead with shared service modules in WebSphere. The newer concept of shared modules started with BSD, to avoid copyright-infringement charges from AT&T over their Unix. Shared modules have the benefit of write-once, use-forever. But you could end up compiling five hundred modules for a simple program.

Many tiny Linux distros had to use standalone software because of its code efficiency. For entry-level software this is perfect. But in cases where a featured function malfunctions, the feature should be stripped out and recoding should replace that function. Recoding is easier than merely adding "if then else" statements, which themselves could cause conflicts.

So programming has been changing; shared modules are no longer adequate. We have to have automation of prefetched data for speed of computation - caches, caches everywhere.
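
One small sketch of "caches everywhere" in ordinary application code - memoizing an expensive computation with the Python standard library; the slow function here is just a stand-in:

[code]
# Memoization: cache results of an expensive computation so repeats are instant.
# slow_lookup() is a stand-in for any costly call (disk, network, heavy math).
from functools import lru_cache
import time

@lru_cache(maxsize=1024)
def slow_lookup(key):
    time.sleep(0.5)            # simulate an expensive fetch
    return key.upper()

start = time.perf_counter()
slow_lookup("amplifier")       # first call pays the cost
slow_lookup("amplifier")       # second call is served from the cache
print(f"two calls took {time.perf_counter() - start:.2f}s")
[/code]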

We have to have automation that is dynamic as we use our computers: batch processing of scripts, and decompression on the fly, including key interpretation for many-bit encryption. Even the shadowed BIOS wait state can be changed on the fly to compensate for the temperature rise in the computer case. Browser-centric, in addition to DRAM-centric, is the outlook for computer architecture.

The future is fewer mouse clicks and fewer keystrokes. To achieve that, we have to redesign our software programming approaches. Microsoft has done keystroke caching for many years. What they missed is software that automatically caches and replays, running programs with the cached keystrokes already placed in the right dialog boxes. Repeated duties in software can be automated easily by filling in the dialog boxes at the right time, without errors.
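
A toy sketch of that "fill the dialog box automatically" idea - replaying previously cached answers into a form instead of asking again; the field names and cached values are made up:

[code]
# Replay cached answers into a form so repeated tasks need no keystrokes.
# Field names and cached values are made-up examples.
CACHED_ANSWERS = {"name": "A. Tang", "product": "amplifier", "quantity": "2"}

def fill_form(fields, cache):
    """Use a cached answer when one exists; ask the user only for the rest."""
    form = {}
    for field in fields:
        if field in cache:
            form[field] = cache[field]           # no keystrokes needed
        else:
            form[field] = input(f"{field}: ")    # fall back to asking
    return form

print(fill_form(["name", "product", "quantity"], CACHED_ANSWERS))
[/code]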

So we can look forward to speed in software execution.
atang1
 
