tag:blogger.com,1999:blog-54241087358270072072024-02-06T18:45:13.564-08:00Optimal BPMThis blog is about the development of Optimal BPM, a Python-based Business Process Management system that builds on the shoulders of the myriad open source projects that deal with parsing and moving data.<br>
So, in the words of someone about to fail big time:<br>
<b>"It should be a breeze, really." :-)</b><br>Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.comBlogger29125tag:blogger.com,1999:blog-5424108735827007207.post-835328536740804642015-03-22T18:33:00.005-07:002015-03-22T18:33:54.470-07:00MBE, Mongo Back End. <h2 style="text-align: center;">
<br /></h2>
<h2 style="text-align: center;">
First MBE version out</h2>
<br /><div>
<br /></div>
<br /><br />MBE is a backend written in Python 3 that provides database access, authentication, access control and other features on top of the MongoDB document database.<br /><br />It is basically a blatant attempt at having the cake and eating it too: exploring the possibilities of combining the absolute freedom of a document database with the fine-grained control of a JSON Schema.<div>
<br />It doesn't hurt, of course, that the same JSON Schema can be used to validate data directly in the web client interface, for example using <a href="https://github.com/Textalk/angular-schema-form">angular-schema-form</a>.<br /><br />Conceptual documentation can be found at <a href="http://www.optimalbpm.se/wiki/index.php/MBE">http://www.optimalbpm.se/wiki/index.php/MBE</a> .</div>
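To make the idea concrete, here is a minimal sketch (not MBE's actual API; the function names are made up for illustration) of validating a document against a JSON Schema before it would be handed to MongoDB, using the widely available `jsonschema` package:

```python
# Hypothetical sketch, not MBE code. Requires the "jsonschema" package.
from jsonschema import validate, ValidationError

# The same JSON Schema could be reused client-side, e.g. by angular-schema-form.
user_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string"},
    },
    "required": ["name", "email"],
}

def is_valid(document, schema):
    """Return True if the document conforms to the schema."""
    try:
        validate(instance=document, schema=schema)
        return True
    except ValidationError:
        return False

def checked_insert(collection, document, schema):
    """Validate a document against a JSON Schema, then insert it."""
    validate(instance=document, schema=schema)  # raises ValidationError on bad data
    return collection.insert_one(document)
```

The appeal is that the schema is plain data: the exact same dict can be serialized and shipped to the browser for client-side validation.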
<div>
<br /><br />API documentation can be found at <a href="http://www.optimalbpm.se/api/mbe/index.html">http://www.optimalbpm.se/api/mbe/index.html</a> .<br /><br />The source as well as a very simple example is found at <a href="https://github.com/OptimalBPM/mbe">https://github.com/OptimalBPM/mbe</a> .<br /><br /><br /><br /><div>
Good times!</div>
<div>
<br /></div>
</div>
Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-4587155439676189212014-11-19T14:59:00.000-08:002014-11-19T15:08:34.512-08:00More WiX articles on Python problems and blogging location<br />
First:<br />
<br />
I felt it was time to clear up some more of the WiX situation with regard to pip on Windows.<br />
Several packages aren't installable via pip on Windows; for example, both pyodbc and lxml simply won't install without vcvarsall.bat.<br />
<br />
The most elegant solution to this is not to install Visual Studio on every machine where you want to run your application, but to build .msi installers that are easily included in your installation.<br />
<br />
So don't wait, but go and see the article on the Optimal BPM wiki: <a href="http://www.optimalbpm.se/wiki/index.php/Python_packages_from_exe_to_WiX_to_msi">How to make .msi packages from Pythonlib .exe-files.</a><br />
<br />
I can answer some questions in the comments here if needed.<br />
<br />
<br />
Second: <br />
<br />
This blog will move to optimalbpm.se soon.<br />
<br />
Bye!Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-53370835209048116612014-11-05T05:15:00.002-08:002014-11-19T14:59:54.981-08:00MongoDB,BPM, QAL 0.4.0, Optimal Sync 0.9.0 and improved WiX article<h3>
Releases</h3>
Recently, <a href="http://www.optimalbpm.se/wiki/index.php/QAL">QAL</a> and <a href="http://www.optimalbpm.se/wiki/index.php/Optimal_Sync">Optimal Sync (Optimal BPM Tools)</a> were released in new versions (<a href="https://sourceforge.net/p/qal/blog/2014/10/qal-040-is-out/">QAL</a>, <a href="https://sourceforge.net/p/optimalbpm/news/2014/11/optimal-bpm-tools-090--or-perhaps-mostly-optimal-sync/">Optimal Sync</a>). Many new features are included, and Optimal Sync especially is shaping up to become a pretty useful tool. In particular, the Windows installation of Optimal BPM Tools (Optimal Sync) is much improved.<br />
<br />
<h3>
MongoDB</h3>
I am currently investigating using MongoDB as a database backend for the system. There is currently a backend written that uses an EAV structure and works with all backends (DB2, Oracle, Postgres, MySQL, SQL Server).<br />
<br />
However, since Optimal BPM is geared towards simplicity, it is pointless to make users learn to install and maintain a complex relational database backend when there may be better options.<br />
<br />
The document-oriented approach of MongoDB seems, at least so far, to be a perfect fit for a BPM system. All potential issues, like data integrity, schema upgrade issues and similar concerns, are under investigation and will hopefully be resolved this week.<br />
<br />
This means a couple of things for Optimal BPM:<br />
<ol>
<li>It will be even easier to install.</li>
<li>It can have a far simpler data layer.</li>
<li>The system only needs to support one database backend for installations.</li>
<li>QAL doesn't need to support more DDL functionality, and can evolve further in the data mining and transformation direction in a cleaner manner.</li>
<li>The UBE project, and its upgrader functionality, can be discontinued.</li>
</ol>
<br />
<br />
A surprising way of using Optimal Sync, if I choose to develop that functionality, would be to make it able to transform an old MongoDB logical schema into a new one, using an old JSON Schema and a new JSON Schema and mapping the differences. This way, one could handle database structure upgrades without having to write scary upgrade scripts. <br />
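A rough sketch of the idea (entirely hypothetical; none of these helpers exist in QAL or Optimal Sync): diff the top-level `properties` of the old and new JSON Schemas, then apply the resulting field additions and removals to each document:

```python
# Hypothetical sketch of schema-diff-driven migration; not real QAL/Optimal Sync code.

def diff_schemas(old_schema, new_schema):
    """Compare the top-level properties of two JSON Schemas."""
    old_fields = set(old_schema.get("properties", {}))
    new_fields = set(new_schema.get("properties", {}))
    return {
        "added": sorted(new_fields - old_fields),
        "removed": sorted(old_fields - new_fields),
    }

def migrate_document(document, diff, defaults=None):
    """Apply a schema diff to a single MongoDB-style document (a dict)."""
    defaults = defaults or {}
    migrated = {k: v for k, v in document.items() if k not in diff["removed"]}
    for field in diff["added"]:
        migrated.setdefault(field, defaults.get(field))
    return migrated
```

Renames and type changes would of course need extra mapping hints, which is where the "map the differences" step comes in.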
<br />
<br />
<h3>
WiX</h3>
The bane of many a developer, WiX, has now been honored by an improved write-up at the wiki. The script/custom action section has been moved to its own article: <a href="http://www.optimalbpm.se/wiki/index.php/Wix_and_scripting_languages"> http://www.optimalbpm.se/wiki/index.php/Wix_and_scripting_languages</a> .<br />
The <a href="http://www.optimalbpm.se/wiki/index.php/WiX">old article</a> is now more like it is supposed to be: a pre-tutorial introduction.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-77242748380454313202014-05-31T16:02:00.000-07:002014-06-04T22:59:42.617-07:00QAL API documentation<br />
The <a href="http://www.optimalbpm.se/wiki/index.php/QAL">QAL</a> API documentation <a class="external text" href="http://www.optimalbpm.se/api/qal/index.html" rel="nofollow">is now online</a> for your browsing pleasure! <br />
<br />
Python3-Sphinx was used; a small <a class="new" href="http://www.optimalbpm.se/wiki/index.php?title=Sphinx&action=edit&redlink=1" title="Sphinx (page does not exist)">article</a> on that subject may develop over time as well.<br />
<br />
<br />
Cheers! <br />
<br />
<br />
<br />Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-16267560349176003752014-05-29T16:04:00.000-07:002014-05-29T16:04:07.511-07:00Windows installer done at last!<h3>
<br />
</h3>
<div class="markdown_content">
Phew. That was actually pretty hard. The WiX and MSI documentation is both lacking and confusing.<br />
<br />
Anyway, the installer is now available for download at the <a class="" href="https://sourceforge.net/projects/optimalbpm/files/Optimal%20BPM%20tools/">files section</a> at source forge.<br />
<br />
It is Burn-based, and it downloads and installs the dependency packages. <br />
<br />
As I mentioned before, I made a WiX primer; I have now updated it with some information on how to install pip packages: <br />
<a href="http://www.optimalbpm.se/wiki/index.php/WiX" rel="nofollow">http://www.optimalbpm.se/wiki/index.php/WiX</a>.<br />
<br />
<br />
Cheers! <br />
</div>
Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-46957433718672619482014-05-23T16:34:00.002-07:002014-05-23T16:35:03.771-07:00WiX, .msi and a setuptools(easy install) and pip bundleHi,<br />
<br />
I am forging ahead with making the .msi files for the Windows installation of Optimal BPM tools (or rather Optimal Sync, to be fair).<br />
<br />
During the process of making the installation bundle (I want to include everything), I have been forced to create some .msi files that I think could be pretty useful outside this project, namely a setuptools and pip Windows <a href="http://www.optimalbpm.se/files/">installation bundle</a>. Please download them, and if you encounter any problems, just create a ticket at the <a href="https://sourceforge.net/p/optimalbpm/tickets/">Sourceforge project page</a>.<br />
<br />
Of course, for a Python application, these programs are very important. So now there is a way to easily include easy_install and pip in a Python 3.3 Windows deployment for those who want that. The installer checks for a Python 3.3 installation and installs the files in its /Scripts folder.<br />
<br />
By the way, WiX is one of the least well documented tools I have ever encountered. At the same time, it is pretty complicated. And strange. So it hasn't been easy. Far from it.<br />
<br />
However, in the end, I have managed to create a pretty flexible setup using <a href="http://wixtoolset.org/documentation/manual/v3/overview/preprocessor.html">WiX preprocessor variables</a>, which actually work in a predictable manner, as opposed to WiX Properties and normal WixVariable constructs, which may or may not be available at "bind" time.<br />
<br />
My current recommendation is to use preprocessor variables instead of the other constructs, as the others don't resolve in an equally predictable manner.<br />
I will elaborate on that further in my <a href="http://www.optimalbpm.se/wiki/index.php/WiX">writeup</a> later on.<br />
<br />
<br />
<br />
Until then, have a nice time.<br />
<br />
<br />Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-50605213522965755132014-05-07T12:12:00.002-07:002014-05-07T12:53:12.538-07:00Windows, MSI and fixes<h4>
Hi everybody,</h4>
<br />
The seriously bug-fixed Optimal Sync (as of now at <a href="http://sourceforge.net/projects/optimalbpm/files/Optimal%20BPM%20tools/">0.8.3</a>) will soon be installable on Windows as well. I have started building an .msi package, which turned out to be a bit more problematic than expected. However, I think I have overcome most of the problems.<br />
<br />
So much so, actually, that I have felt compelled to post a <a href="http://www.optimalbpm.se/wiki/index.php/WiX">guide</a> on how WiX works and its caveats. The guide is thought of as a primer before you get into the official tutorial, which is a bit wordy and difficult to get through.<br />
<br />
I consider the problems I encountered quite typical of what most people developing an .msi package run into, which should make my example file all the more useful. Notably, I use the paraffin.exe utility instead of the quite insufficient heat.exe. I would imagine most do; maybe the article can save them some time.<br />
<br />
Anyway, the guide is at the wiki, <a href="http://www.optimalbpm.se/wiki/index.php/WiX">here</a>, and I hope it will help you understand WiX and its somewhat maddening quirks a bit better.<br />
<br />
Cheers! <br />
<br />
<br />Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-4435108720127874602014-04-17T13:40:00.001-07:002014-05-06T03:57:02.257-07:00First release of the Optimal Sync application/script - 0.8! <h3 class="dark title">
</h3>
<h4 class="dark title">
Tagline(s):<br /><small>
</small>
</h4>
<h4>
</h4>
<div class="markdown_content">
What? The sales department saving their contacts in an Excel sheet? You want their edits maintained in the CRM database? Do you regularly want to download a data set from a table in a web page? And transform the data a little before you insert it?<br />
<h4>
It's out! </h4>
Optimal Sync is a small and simple multi-platform (well, soon) BPM utility that can move and transform data between many different sources and destinations. <br />
As it is dependent on Python 3.3 and on just recently released open source software packages, it requires at least Debian "Jessie" or Ubuntu 13.04. It will, however, stay with stable releases after this initial release. 0.9 will mean a tested Windows version, and 1.0 OS X.<br />
That bears restating: this is a first release, so there are some quirks. However, the ambitions are high and there is an aggressive release schedule ahead.<br />
Please try it out, and don't hesitate to report any issues as tickets here at the project page; you'll get a response. <br />
(The GUI is "ugly"? Well, please share how to make it prettier!)<br />
Download packages at Sourceforge:<br />
<a href="https://sourceforge.net/projects/optimalbpm/files/Optimal%20BPM%20tools/">https://sourceforge.net/projects/optimalbpm/files/Optimal%20BPM%20tools/</a><br />
Install QAL before Optimal Sync:<br />
<a href="http://www.optimalbpm.se/wiki/index.php/Optimal_Sync#From%20packages" rel="nofollow">http://www.optimalbpm.se/wiki/index.php/Optimal_Sync#From%20packages</a><br />
More documentation at:<br />
<a href="http://www.optimalbpm.se/wiki/index.php/Optimal_Sync" rel="nofollow">http://www.optimalbpm.se/wiki/index.php/Optimal_Sync</a><br />
<br />
<h4>
And phew.</h4>
The above is obviously a quote from the Sourceforge blog post.<br />
So finally, something actually installable and runnable has emerged from all this experimenting and fiddling around. Calling it a cool application is perhaps not right; the GUI leans very much towards "functional" and away from "aesthetic".<br />
<br />
BUT, it is all open source, and it uses the latest libraries available. In fact, they are so new that this could not have been released a month ago. There will now be a consolidation phase where only the application and QAL will evolve while the underlying libraries go into stabler repositories. Cool stuff will be added indeed, once all the technical hurdles have been cleared.<br />
<br />
Anyway, happiness abound!<br />
<br />
<h3>
<i>Happy Easter!</i> </h3>
</div>
Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-61429137719318013442014-04-09T05:37:00.002-07:002014-04-09T05:37:53.898-07:00<h2 style="text-align: center;">
Simple replicator</h2>
<br />
The first thing coming out of the <a href="http://www.optimalbpm.se/">Optimal BPM </a>project will be the 0.8 version of a tool called Simple Replicator. The Business Process Management angle is that transfer and transformation of data is a fundamental part of BPM. Also, the Simple Replicator is just that: simple. It only scratches the surface of what is possible with <a href="http://sourceforge.net/projects/qal/">QAL</a>.<br />
<br />
I'll begin with Debian packages, then Windows and Red Hat (v 0.9) and Mac OS X (1.0). But if you just can't wait for .deb packages, you can try out the source already; it will not change much until release:<br />
git clone git://git.code.sf.net/p/optimalbpm/code optimalbpm-code<br />
<br />
<br /><br />
However, in most situations you don't have to join rows from a webpage HTML tag tree with rows from an RDBMS and with those of a .csv file. So that is what "simple" means. <br />The full potential of QAL and Optimal BPM will instead be managed in a web-based interface. If you want to see examples of what QAL can do, look at the <a href="https://sourceforge.net/p/qal/code/ci/master/tree/qal/sql/tests/">unit tests.</a> Information about manual installation is on the wiki, see below.<br />
<br />
I have already started using the ticket system at Sourceforge to track some of the development. It works OK for that purpose; of course, it is nowhere near what Bugzilla and other bug trackers can provide, but it will suffice for now. You can view and add tickets <a href="http://sourceforge.net/p/optimalbpm/tickets/?source=navbar">here</a>. All questions are welcome. <br />
<br />
There is also the beginning of a <a href="http://www.optimalbpm.se/wiki/index.php/Main_Page">wiki</a> brewing. So far there are only articles about <a href="http://www.optimalbpm.se/wiki/index.php/QAL.DAL">QAL's database abstraction layer</a> and the Optimal BPM tool <a href="http://www.optimalbpm.se/wiki/index.php/Simple_Replicator">Simple Replicator</a>.<br />
<br />
Cheers!<br />
<br />
<br />Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-58914693772227408322013-12-25T16:22:00.001-08:002013-12-25T16:22:08.193-08:00New name, Optimal BPM. <div style="text-align: center;">
<br /></div>
<div style="text-align: center;">
<b><span style="font-size: x-large;">Name change</span></b> and <b><span style="font-size: large;">GoDaddy</span> <span style="font-family: "Helvetica Neue",Arial,Helvetica,sans-serif;"><span style="font-size: large;"><u><i>anger</i></u></span></span></b></div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: left;">
<b>Unified BPM has changed its name to Optimal BPM. </b></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Why? Because it is free, plenty better and more catchy: "Optimal BPM - The beat of your business". :-) </div>
<div style="text-align: left;">
<br />Also, I had a not-so-good feeling concerning the future of the Unified BPM name, conflict-wise. However, QAL is still QAL and Unified Back End is still Unified Back End. (Because I still like that name, and OBE was something else.)</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
The Sourceforge project will change its unix name shortly.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
A web site and wiki will soon surface instead of the placeholder at optimalbpm.org*. <br /></div>
<div style="text-align: left;">
<b><br />GoDaddy is strange and does business like no other.</b></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
I tried to get the unifiedbpm.com and .org addresses and turned to GoDaddy's buy service. For $69, they'll contact the owner for you and do your bidding. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
However, even though the seller was said to be interested in selling, they never got back after I made my first bid. Of course, they could have considered it too low or something, but they didn't come back with a counteroffer.</div>
<div style="text-align: left;">
They didn't even say it was way too low. And this is where things start to get weird.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
I asked to get copies of what they had sent to the potential seller; I didn't need to know who it was, just what had been said.</div>
<div style="text-align: left;">
Then they told me that they don't give out "internal correspondence" unless they get a subpoena, which he recommended me to get! Holy bejesus, a subpoena?<br /><br />The craziest part of the entire thing is that:<br /><br />1. There is no proof *whatsoever* (they haven't even bothered to fake some) that they have even done *anything*.<br /></div>
<div style="text-align: left;">
2. The owner of the domain, according to whois, is GoDaddy themselves. This means that there could very well be no counterparty at all; GoDaddy might actually be bidding with themselves! Like stock brokers that used to clear in-house but still charge fees. But almost worse!</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
It is quite extraordinary that *anyone* would use such a service. Had I known, I would never have considered it.<br /><br /><br /><br />* Yes, I nicked the .com and .se as well. <br /> .SE? Please, I am a Swede after all...</div>
Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-85986276307715247972013-08-15T05:05:00.001-07:002013-08-15T05:09:42.314-07:00Back on track/Lots of changesI have decided to split the project into three different projects.<br />
<br />
From the bottom up, it will look like this:<br />
<br />
1. QAL wants to try its own wings and has moved out, to https://sourceforge.net/projects/qal/ .<br />
<br />
The query abstraction layer is now a separate software package, and I have made it available through PyPI. At least Python 3 is needed; install it using<br />
<span style="font-family: "Courier New",Courier,monospace;">sudo pip-3.X install qal</span><br />
<br />
<br />
2. Unified Back End, UBE, at https://sourceforge.net/projects/ube/ , has been created. This is simply a matter of the Unified BPM node tree and EAV model moving out, because its access layers and its database are very generic designs that could be used in many software projects. It is how I, after 15 years of business database and software design, want to persist my data. QAL is a dependency. As with QAL, it will soon be available through PyPI, and at least Python 3 is needed; install it using <br />
<span style="font-family: "Courier New",Courier,monospace;">sudo pip-3.X install unified_back_end</span><br />
<br />
<br />
3. Unified BPM. What is left is the system itself, which currently isn't all that much, as most of its functionality is in the back end layers and prototype code.<br />
<br />
3.1 Unified BPM Tools is not a part of the Unified BPM system, but it is part of the project. Soon, a scriptable data-moving utility will turn up there.<br />
<br />
<br />
<br />Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-75721376039722487822012-11-05T10:26:00.002-08:002012-11-05T10:31:21.431-08:00AOP, yeah you know me. And some postgresql/windows context issues.<h2>
AOP</h2>
Unified BPM will make heavy use of Aspect Oriented Programming.<br />
Why? Because if there ever was a system that would benefit from centralizing cross-cutting concerns, it is this one.<br />
There is basically no typical use of AOP that isn't present everywhere in Unified BPM: security, logging, caching (memoization), synchronization, connection management, proxification, run-time debugging, tracing, statistics and basically whatever else you could think of; they are all present here.<br />
<br />
Also, the support for it in Python is positively excellent and simple to use, especially via the "decorator" module. <br />
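As a taste of what this looks like, here is a minimal stdlib-only sketch of a logging aspect expressed as a decorator (the real system would likely use the "decorator" module mentioned above; `create_node` is just an invented example function):

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ubpm")

def logged(func):
    """A cross-cutting logging aspect: wraps any function with entry/exit logging."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("calling %s", func.__name__)
        result = func(*args, **kwargs)
        log.info("%s returned %r", func.__name__, result)
        return result
    return wrapper

@logged
def create_node(name):
    """An example business function; the logging concern stays out of its body."""
    return {"name": name}
```

The same pattern extends to security checks, caching and the other concerns listed above: each becomes one decorator instead of boilerplate repeated in every function.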
<br />
<br />
<h2>
Postgresql-client vs. Windows USERNAME</h2>
So, suddenly Hudson started reporting that the Win XP PostgreSQL tests failed.<br />
I also realized that they only failed while being run remotely by the Hudson test runner. <br />
I just could not get my head around it; after a while I found out that it was getpass that tried to load the pwd module, which of course doesn't exist in the Windows environment.<br />
I had no idea why, though. It really had no use for that data; it only used it to populate some defaults when it couldn't find the USERNAME environment variable.<br />
The thing was that the variable was available when I ran the tests as the same user from a logged-in shell, but it seemed it wasn't when they were run by Hudson.<br />
<br />
It took me a while to come to the conclusion that my only option was to manually set the USERNAME variable as a parameter in Hudson. I did, and it worked.<br />
<br />
It seems that the USERNAME environment variable isn't set by Windows, but by the process creator. Or something to that effect. <br />
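The behavior can be illustrated with `getpass.getuser`, which walks a list of environment variables in order and only falls back to the Unix-only `pwd` module when none of them is set, which is exactly the fallback that blew up under Hudson on Windows:

```python
import getpass
import os

# getpass.getuser() checks LOGNAME, USER, LNAME and USERNAME in that order,
# and only imports the Unix-only pwd module when none of them is set.
# Simulate the Hudson fix: clear the earlier variables and set USERNAME.
for var in ("LOGNAME", "USER", "LNAME"):
    os.environ.pop(var, None)
os.environ["USERNAME"] = "hudson"

print(getpass.getuser())  # -> hudson
```

So setting USERNAME as a job parameter short-circuits the lookup before the pwd fallback is ever reached.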
<br />
<br />
<h2>
</h2>
<h2>
</h2>
Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-64421903354461097392012-09-02T14:27:00.001-07:002012-09-02T14:31:05.618-07:00Database editor and structureHi,<br />
<br />
Some new stuff; I have not had much time this last year. Most importantly, though, the genesis of the UBPM database structure now builds on all platforms (Postgres, MySQL, SQL Server, DB2 and Oracle).<br />
<br />
<h3>
<b>Editor for XML SQL structures:</b></h3>
<h3>
</h3>
So first, I have created a web based editor for the XML SQL structures. Behold.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNWjeR8ek53SCoatPLotgLSJLub6Puil3MPCXfRhB1ZbC31_xyxIXIoAjJr7kp9VzqY_VDoGXaC2p5M9Icmh09c8302uN4mKFKMIcaymdTEfMiw8tQIQeAIEJVhnYpwCmrGz0dR_GQHuM/s1600/UBPM_editor.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNWjeR8ek53SCoatPLotgLSJLub6Puil3MPCXfRhB1ZbC31_xyxIXIoAjJr7kp9VzqY_VDoGXaC2p5M9Icmh09c8302uN4mKFKMIcaymdTEfMiw8tQIQeAIEJVhnYpwCmrGz0dR_GQHuM/s400/UBPM_editor.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
It resides in the Q_N_D folder, which means that it is quick and dirty: something built just to be able to move forward. However, it had to have some...<br />
<br />
<b>Features</b>:<br />
<ul>
<li>Ability to store (and load) the resulting data files centrally (preferably within the source tree).</li>
<li>Functionality to build upgrade files for the upgrades.</li>
<li>It had to be easy to also make stand-alone statements.</li>
</ul>
Just run controller.py and open up http://127.0.0.1:8080 to watch it in action.<br />
It uses CherryPy for the backend and Genshi for templating; really nice and lightweight frameworks indeed. <br />
<br />
<br />
<h3>
The database</h3>
<br />
Using that, I have finally stumbled into what for me is far more familiar ground, database design.<br />
I have created an upgrade script that creates the first parts of the Unified BPM database structure(stuff like a tree, rights, users, languages).<br />
<br />
<br />
The database will store the persistent structural data in a central tree.<br />
Its meta data will be stored in one of three different ways, depending on what suits the specific entity considering simplicity, flexibility and performance.<br />
<br />
<h4>
<b>The data models:</b></h4>
<br />
<ul>
<li>Relational:<br />This is the usual way people store data, with sub tables detailing the nodes in the tree.</li>
<li>EAV (Entity-Attribute-Value):<br />Instead of having many sub-tables, everything is stored in one table. If one doesn't want to use variants, one uses one table per data type. In reality, however, that doesn't matter as much, since database engines nowadays store data in a far more intelligent manner; it could actually be SLOWER to NOT use variants. How about that, strong-typing advocates?</li>
<li>XML:<br />Actually similar to EAV in many modern database implementations, where XML is its own data type which can be queried just like a table structure. However, UBPM will not use any of these implementations, however cool they are; UBPM will only store the XML as strings.<br />The reasons are that 1) all those implementations are proprietary and would be hell to generalize, and 2) it is not the point. XML data will be used to allow for freely structured storage... that is still somewhat structured. If you know what I mean. An example would be third-party kinds of data. </li>
</ul>
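To make the EAV model concrete, here is a small illustrative sketch (not UBPM code; the data is invented) of how rows from a single entity-attribute-value table pivot back into per-entity records:

```python
# Illustrative only: pivoting EAV rows back into per-entity records.
# Rows are (entity_id, attribute, value) tuples, as they would come
# from one EAV table instead of many relational sub-tables.
from collections import defaultdict

eav_rows = [
    (1, "name", "Invoice flow"),
    (1, "language", "en"),
    (2, "name", "Fakturaflöde"),
    (2, "language", "sv"),
]

def pivot(rows):
    """Fold EAV triples into one dict per entity."""
    entities = defaultdict(dict)
    for entity_id, attribute, value in rows:
        entities[entity_id][attribute] = value
    return dict(entities)
```

The flexibility is visible here: adding a new attribute is just another row, with no ALTER TABLE needed.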
<br />
<h3>
In conclusion </h3>
<br />
It now seems that most of the surprisingly large technical hurdles in dealing with multiple database platforms have been negotiated. Now it is just a matter of expanding the support for more of the SQL standard. But the next statements in line, like ALTER TABLE and UPDATE, are quite easy and contain the same elements as those already implemented.<br />
Also, my self-educational goals up till now have been met; I have learned quite a lot.<br />
From here on, it feels like it should be a far more straightforward matter of comparatively simply building the system. I suspect the multi-platform deployment will create problems; however, I will simply not try to generalize between Windows and Linux installations. They will basically be completely different solutions.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-73226737467746256862011-11-23T14:30:00.000-08:002011-11-23T15:56:36.272-08:00Success!Hi everybody!<br /><br />Now I have finally succeeded in making the following work:<br /><ul><br /><li> XML to object structure</li><br /><li> Object structure to XML (the tests go both ways)</li><br /><br /><li> The following structures are currently supported:<br /><ul><br /><li>SELECT<br /><ul><br /><li>Operators/Identifiers/Aliases</li><li>Function calls</li><li>TOP/LIMIT</li><li>CASE..WHEN</li><li>JOIN</li><li>WHERE</li><li>ORDER BY</li><br /></ul><br /></li><br /><li>INSERT INTO (from SELECT)</li><br /><li>CREATE TABLE<br /><ul><br /><li>Constraints(PK/FK)</li><li>Indexes</li><li>Default values</li><li>Auto increment. On Oracle using triggers, thanks for that pointless inconvenience, Oracle :-(</li><br /></ul><br /></li><br /><li>Custom<br /><ul><br /><li>The "custom" structure simply takes an SQL statement for each backend.<br /><br />It is intended for those cases when very database-specific features are needed.</li><br /></ul></li></ul></li><br /><li>The db_upgrader library, which uses the DAL to upgrade databases, now also tracks applications and versions, in addition to performing the upgrades from an XML definition. 
Cewl stuff.<br /></ul><br /><br />Basically, those three statements are enough to build most things.<br />There is usually no long-term gain in making a database much more complex than that.<br />I'd say you solve 95% of your database creation needs with that, and what it does not solve... well, perhaps it should not be in the database layer.<br /><br /><br /><br />These structures are all tested as integration tests, run on both Windows XP and Debian (unstable), that use the db_upgrader library to create an example database on the following platforms:<br /><ul><br /><li>PostgreSQL (on Debian stable)</li><br /><li>MySQL (on Debian stable)</li><br /><li>MSSQL Server Express (on Windows XP)</li><br /><li>IBM DB2 Express-C (on Debian stable)</li><br /><li>Oracle 11g XE (on Fedora 16)</li><br /><br /></ul><br /><br />I can say one thing: it sure wasn't <span style="font-style: italic; font-weight: bold;">easy</span>.<br /><br />I basically had these requirements:<br /><ol><li>The Database Abstraction Layer (DAL) should be able to run on both Windows and Linux and seamlessly do the same thing on PostgreSQL, MySQL, MSSQL, DB2 and Oracle.<br /></li><li>The installation of the clients must be simple (server:port, database, user, password), and therefore any installation procedures need to be scriptable. A user should not need to edit a million config files, set environment variables or download lots of files to make it work.</li><li>The connection to the database backends should not need lots of tweaking; it should "just work". I did not mind if some performance had to be sacrificed. In my experience, after 10 years of database development, it is usually not the driver that is the show-stopping performance bottleneck, but indexing, lack of proper database engine internals and, of course, bad database design.<br /></li></ol>I almost met all of them. 
The only one where I didn't make it all the way was point 2.<br />And actually, I think I can semi-automate, or at least wildly simplify, those processes if the worst client-side hassle offenders, IBM DB2 and Oracle 11g, are OK with it.<br /><br />Configuring the back end servers was a real pain though, especially Oracle 11g.<br />Don't try to install that on Debian/Ubuntu, folks. Go with Fedora for Oracle 11g and you'll live longer.<br />Luckily, I am done with that now. :-)<br /><br />On to moving the whole thing into public display on the Sourceforge GIT. Or perhaps somewhere else, Google Code? Ideas?<br /><br />Anyway, I will now set up some CI testing solution around my servers and clients. That'd be great.<br /><br />And then it is on to some more installation (.deb/.msi files) stuff, and then the fun part: adding features.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-18168848687383813642011-09-18T12:54:00.000-07:002011-09-18T15:06:38.012-07:00XML to structure and back again.Hi again.<br /><br />After a year-long pause, I have started with this again.<br />And so I now have my first test of the XML-to-SQL-objects-to-XML round trip.<br />It can both parse the XML and generate it back from the object structure, producing a file identical to the original. 
<br /><br />The XML looks like this:<br /><blockquote><br /><pre><?xml version="1.0" ?><br /><sql:Statement xmlns:sql="http://www.unifiedbpm.se/XMLschema/DAL/SQL"<br />xmlns:xsi="http://www.w3.org/2001/XMLSchema" xsi:schemaLocation="http://www.unifiedbpm.se/XMLschema/DAL/SQL ../SQL.xsd"><br /><sql:Verb_CREATE_TABLE><br /><sql:Name>Table1</sql:Name><br /><sql:Constraints><br /><sql:Parameter_Constraint><br /><sql:CheckConditions /><br /><sql:Constraint_Type>PRIMARY KEY</sql:Constraint_Type><br /><sql:References><br /> <sql:str>Table1ID</sql:str><br /></sql:References><br /><sql:Name>PK_Table1_Table1ID</sql:Name><br /></sql:Parameter_Constraint><br /><sql:Parameter_Constraint><br /><sql:CheckConditions /><br /><sql:Constraint_Type>FOREIGN KEY</sql:Constraint_Type><br /><sql:References><br /> <sql:str>Table1ID</sql:str><br /> <sql:str>Table2</sql:str><br /> <sql:str>Table2ID</sql:str><br /></sql:References><br /><sql:Name>FK_Table1_Table1ID_Table2_Table2ID</sql:Name><br /></sql:Parameter_Constraint><br /><sql:Parameter_Constraint><br /><sql:CheckConditions><br /> <sql:Parameter_Conditions><br /> <sql:Conditions><br /> <sql:Parameter_Condition><br /> <sql:Operator>></sql:Operator><br /> <sql:ANDOR>AND</sql:ANDOR><br /> <sql:Right><br /> <sql:Parameter_Numeric><br /> <sql:Numeric_Value>2.4</sql:Numeric_Value><br /> </sql:Parameter_Numeric><br /> </sql:Right><br /> <sql:Left><br /> <sql:Parameter_Numeric><br /> <sql:Numeric_Value>1.3</sql:Numeric_Value><br /> </sql:Parameter_Numeric><br /> </sql:Left><br /> </sql:Parameter_Condition><br /> <sql:Parameter_Condition><br /> <sql:Operator>LIKE</sql:Operator><br /> <sql:ANDOR>AND</sql:ANDOR><br /> <sql:Right><br /> <sql:Parameter_String><br /> <sql:String_Value>%</sql:String_Value><br /> </sql:Parameter_String><br /> </sql:Right><br /> <sql:Left><br /> <sql:Parameter_Identifier><br /> <sql:Identifier>T1.firstname</sql:Identifier><br /> </sql:Parameter_Identifier><br /> </sql:Left><br /> </sql:Parameter_Condition><br /> 
</sql:Conditions><br /> </sql:Parameter_Conditions><br /></sql:CheckConditions><br /><sql:Constraint_Type>CHECK</sql:Constraint_Type><br /><sql:References><br /> <sql:str>Table1ID</sql:str><br /></sql:References><br /><sql:Name>CK_Table1_Name</sql:Name><br /></sql:Parameter_Constraint><br /><sql:Parameter_Constraint><br /><sql:CheckConditions /><br /><sql:Constraint_Type>UNIQUE</sql:Constraint_Type><br /><sql:References><br /> <sql:str>Table1ID</sql:str><br /></sql:References><br /><sql:Name>UQ_Table1_Name</sql:Name><br /></sql:Parameter_Constraint><br /></sql:Constraints><br /><sql:Columns><br /><sql:Parameter_ColumnDefinition><br /><sql:Datatype>serial</sql:Datatype><br /><sql:Default /><br /><sql:NOTNULL>True</sql:NOTNULL><br /><sql:Name>Table1ID</sql:Name><br /></sql:Parameter_ColumnDefinition><br /><sql:Parameter_ColumnDefinition><br /><sql:Datatype>string(400)</sql:Datatype><br /><sql:Default /><br /><sql:NOTNULL>False</sql:NOTNULL><br /><sql:Name>Table1Name</sql:Name><br /></sql:Parameter_ColumnDefinition><br /><sql:Parameter_ColumnDefinition><br /><sql:Datatype>timestamp</sql:Datatype><br /><sql:Default /><br /><sql:NOTNULL>False</sql:NOTNULL><br /><sql:Name>Table1Changed</sql:Name><br /></sql:Parameter_ColumnDefinition><br /></sql:Columns><br /><sql:postSQL /><br /></sql:Verb_CREATE_TABLE><br /></sql:Statement><br /></pre><br /></blockquote><br /><br />As you see, it is basically an abstraction of the SQL language.<br />It should be pretty easy to write a GUI for generating the XML that would be similar to actually writing SQL.<br />With that as input, the following SQLs are generated when calling any of the SQL objects' AsSQL-method(note the oracle syntax):<br /><br /><br />SQL for the MySQL database:<br /><code><br />CREATE TABLE Table1 (<br />`Table1ID` INTEGER AUTO_INCREMENT NOT NULL,<br />`Table1Name` VARCHAR(400),<br />`Table1Changed` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,<br />CONSTRAINT `PK_Table1_Table1ID` PRIMARY KEY (Table1ID),<br />CONSTRAINT 
`FK_Table1_Table1ID_Table2_Table2ID` FOREIGN KEY (Table1ID) REFERENCES Table2(Table2ID),<br />CONSTRAINT `CK_Table1_Name` CHECK ((1.3 > 2.4) AND (T1.firstname LIKE '%ercedes')),<br />CONSTRAINT `UQ_Table1_Name` UNIQUE (Table1ID)<br />)<br /></code><br /><br />SQL for the PostgreSQL database:<br /><br /><br /><code><br />CREATE TABLE Table1 (<br />"Table1ID" serial NOT NULL,<br />"Table1Name" VARCHAR(400),<br />"Table1Changed" DATETIME DEFAULT(NOW()),<br />CONSTRAINT "PK_Table1_Table1ID" PRIMARY KEY (Table1ID),<br />CONSTRAINT "FK_Table1_Table1ID_Table2_Table2ID" FOREIGN KEY (Table1ID) REFERENCES Table2(Table2ID),<br />CONSTRAINT "CK_Table1_Name" CHECK ((1.3 > 2.4) AND (T1.firstname ILIKE '%</code><code>ercedes</code><code>')),<br />CONSTRAINT "UQ_Table1_Name" UNIQUE (Table1ID)<br />)<br /></code><br /><br />SQL for the Oracle database:<br /><br /><br /><code><br />CREATE TABLE Table1 (<br />"Table1ID" integer NOT NULL,<br />"Table1Name" VARCHAR2(400),<br />"Table1Changed" DATETIME DEFAULT(NOW()),<br />CONSTRAINT "PK_Table1_Table1ID" PRIMARY KEY (Table1ID),<br />CONSTRAINT "FK_Table1_Table1ID_Table2_Table2ID" FOREIGN KEY (Table1ID) REFERENCES Table2(Table2ID),<br />CONSTRAINT "CK_Table1_Name" CHECK ((1.3 > 2.4) AND (T1.firstname LIKE '%</code><code>ercedes</code><code>' escape 'C')),<br />CONSTRAINT "UQ_Table1_Name" UNIQUE (Table1ID)<br />)<br />CREATE SEQUENCE seq_Table1_Table1ID_DAL_auto_increment<br />start with 1<br />increment by 1<br />nomaxvalue;<br />CREATE TRIGGER tr_Table1_Table1ID_DAL_auto_increment<br />BEFORE INSERT ON Table1 FOR EACH ROW BEGIN<br />SELECT seq_Table1_Table1ID_DAL_auto_increment.nextval INTO :new.id FROM dual;<br />end;<br /></code><br /><br />SQL for the DB2 database:<br /><br /><br /><code><br />CREATE TABLE Table1 (<br />"Table1ID" INT GENERATED ALWAYS AS IDENTITY NOT NULL,<br />"Table1Name" VARCHAR(400),<br />"Table1Changed" TIMESTAMP,<br />CONSTRAINT "PK_Table1_Table1ID" PRIMARY KEY (Table1ID),<br />CONSTRAINT 
"FK_Table1_Table1ID_Table2_Table2ID" FOREIGN KEY (Table1ID) REFERENCES Table2(Table2ID),<br />CONSTRAINT "CK_Table1_Name" CHECK ((1.3 > 2.4) AND (T1.firstname LIKE '%ercedes')),<br />CONSTRAINT "UQ_Table1_Name" UNIQUE (Table1ID)<br />)<br /></code><br /><br />SQL for the SQL Server database:<br /><br /><br /><code><br />CREATE TABLE Table1 (<br />[Table1ID] int IDENTITY(1,1) NOT NULL,<br />[Table1Name] varchar(400),<br />[Table1Changed] DATETIME DEFAULT(NOW()),<br />CONSTRAINT [PK_Table1_Table1ID] PRIMARY KEY (Table1ID),<br />CONSTRAINT [FK_Table1_Table1ID_Table2_Table2ID] FOREIGN KEY (Table1ID) REFERENCES Table2(Table2ID),<br />CONSTRAINT [CK_Table1_Name] CHECK ((1.3 > 2.4) AND (T1.firstname LIKE '%ercedes')),<br />CONSTRAINT [UQ_Table1_Name] UNIQUE (Table1ID)<br />)<br /></code><br /><br />Small disclaimer: I haven't run these scripts on all database architectures; I am currently rebuilding the testing environment.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-54726255802729469692010-10-26T14:59:00.000-07:002010-10-27T12:11:01.470-07:00XML, back and forth and here again.Hi,<br />Lately I've been busy creating, validating and parsing XML.<br /><br />There are now two schemas used when defining upgrade files:<br />http://www.unifiedbpm.se/XMLschema/DAL/SQL.xsd<br />http://www.unifiedbpm.se/XMLschema/DBUpgrader/DBUpgrader.xsd<br /><br />The first defines the (so far supported) SQL, and the second defines the DBUpgrader's upgrade file.<br /><br />As always, I had less time and it took more effort than I thought, but now I am closing in on having a working upgrader.<br /><br />It is pretty cool to watch integration tests create the same structure on five different database platforms, insert data, select it back, and actually work.<br />A bit left to do, but after that, this might actually be useful functionality.<br /><br />That's 
all folks, bye!Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-2995502834485108292010-09-29T23:48:00.000-07:002010-10-02T14:13:17.739-07:00Road map.Hi,<br /><br /><span style="font-size:130%;">SO FAR</span><br /><br />I've gotten to a point where I have proven and implemented the two most important technical concepts needed for the development of Unified BPM:<br />1. Database abstraction layer<br />2. Database upgrade management, including an (almost finished) <a href="http://www.unifiedbpm.se/XMLschema/DAL/SQL.xsd">XML schema</a> for upgrade files. The XML schema is there to allow for easy upgrade client development; one should not have to dabble in XML file editing. An interesting point is that the code generates the schema mostly by itself, based on its functionality, allowing for simpler maintenance.<br /><br />This functionality extends to the MySQL, PostgreSQL, DB2, Oracle and SQL Server platforms, except for upgrade management, where Oracle is partially unsupported due to its aforementioned lack of auto-increment shortcuts. Maybe someone knows a way around that problem?<br /><span style="font-size:130%;"><br />ROAD MAP</span><br /><br />At this stage, then, I am in the really exciting position of being able to present a short road map. It is a seriously pragmatic one, geared toward getting something basic working as soon as possible:<br /><br />1. Basic database abstraction/upgrade management. Creating tables, selecting and inserting data. I won't do anything more advanced with this unless other parts of UBPM need more functionality. In my experience, and according to my expectation, needing advanced SQL statements at the beginning of a project is mostly a sign of poor database design. Also, I am consciously keeping the database as simple as possible at this stage. I am nearly done with this stage.
I have some structural stuff left on the database upgrader, but then I will move on.<br /><br />2. The next step will be the messaging system of UBPM. I will begin by deciding the basic structure of communication within UBPM. I have already figured most of it out, but now I need to scale that down to its bare necessities in order to start somewhere.<br />Note that no logging of communications will be implemented in this stage.<br />The test for this will be to send a file from a listener on one host to another. One listener will run on a Linux host, and the other on a Windows host.<br /><br />3. Not only files are to be sent, so next up will be to SELECT a dataset on one database server of one type, INSERT the same data on another of a different type, and compare. This will put the DAL through its paces and probably expose any problems there.<br /><br />4. By now, the DAL will probably have been tested enough for me to start working on the Unified BPM database structure. Its primary purpose will be to log whatever happens on the system, and that is what I'll implement first. I will test this, and the database upgrade process, by installing and upgrading the database and running the tests of points 2 and 3.<br />At this stage, I will be very happy, since I will then know that all the important concepts of UBPM work. As I see it, the largest technical hurdles will have been negotiated and I will have a working integration test environment.<br /><br /><br />5. GUI. I will start to develop a simple web-based GUI that enables the generation of the XML database upgrade files, because from now on, there will be significantly more database development.<br />In general, it will be more straightforward development, and perhaps the perfect moment to invite others to join the project.
It will certainly still be small enough to be easily influenced by new ideas and feedback.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-31208570384985664842010-09-14T01:43:00.000-07:002010-09-14T05:30:12.685-07:00Connection(s) established. Initiating execution.Hi y'all.<br /><br />So now the next phase is finished.<br />I have now implemented and tested connections to all supported database platforms.<br />I only used multi-platform Python components.<br />These are:<br /><ul><li>PostgreSQL (psycopg2, native)</li><li>MySQL (MySQLdb, native)<br /></li><li>SQL Server (pyodbc using FreeTDS)</li><li>DB2 (pyodbc, needs libdb2.so)</li><li>Oracle (cx_Oracle, native, needs Oracle Instant Client)<br /></li></ul>Getting all the databases running on my network was a real chore, to be honest.<br />DB2 and Oracle especially were quite cumbersome to connect to, and needed lots of proprietary stuff installed on the client. Also, they don't really support Debian/Ubuntu.<br />However, now that I know how everything works, I will try to automate client installations, or at least document them well, so Unified BPM users won't have to go through all the hassle I had to.<br /><br />Anyway, next up is to define a small XML schema for generalized database upgrade management.<br />My thought is that the same XML should be able to manage database upgrades for multiple platforms, through the functionality of the database abstraction layer.<br />I don't want some Unified BPM components to end up having half-a$$ed support for specific databases, like it is in the Drupal project, for example.<br />Also, this makes it possible to make really kewl database tests.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-16418788868551029402010-08-17T02:37:00.000-07:002010-08-17T02:57:17.143-07:00Done with first phase of SQL generationSo there.<br /><br 
/>Now I have, in Python, created a class that does only the things I need to begin with for the first database part of Unified BPM: creating and querying simple database structures.<br />The class can currently create these kinds of SQL statements (and then some):<br /><ul><li>SELECT - with WHERE, expressions, conditions, CASE, function calls, joins, subselects.</li><li>CREATE TABLE - with defaults, primary keys, foreign keys and auto-increment<br /></li><li>CREATE INDEX - unique, clustered, non-clustered</li></ul>Supported databases for this functionality are:<br /><ul><li>Full<ul><li>MySQL</li><li>PostgreSQL</li><li>DB2</li><li>SQL Server</li></ul></li><li>Partial DDL<ul><li>Oracle (due to identifier limitations and auto-increment hassle)</li></ul></li></ul>Now I will use this generator to make tests for the DAL's connection handling.<br />I now have working installations of all five database servers and will use them to set up integration tests for all flavours.<br /><br />In practice, Partial DDL means that Unified BPM won't run on that platform but will be able to communicate with it. I simply won't commit to restricting myself to 30-byte identifiers.
Somebody please think of the FK-names!Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-71599348726824546902010-08-15T03:14:00.000-07:002010-08-15T03:20:12.223-07:00Adding DB2 supportWell, I thought that while I'm at it, I might as well add DB2 support too.<br />So now I will; it seems to have a pretty straightforward SQL implementation.<br /><br />By the way, it is kind of interesting to see what free (as in beer) offerings the different database vendors have; it is a bit revealing with regards to their attitude towards developers.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com2tag:blogger.com,1999:blog-5424108735827007207.post-45054270410159784602010-08-14T01:58:00.000-07:002010-08-15T00:53:45.464-07:00Dropping Oracle support (at least DDL)Hi,<br />I have decided to drop Oracle support.<br />The last time I used Oracle was many years ago, and back then I had none of the experience I have today.<br />The reason? The extremely silly 30-character length restriction on identifiers.<br /><br />Oracle restricts the length of identifiers to 30 characters, which is extremely strange, this being 2010 and all.<br />Large systems with lots of complex and specific data need long identifier names. I once belonged to the school of abbreviation, where everything got abbreviated down to a length of about 20 characters. I have since met THE REAL WORLD of huge databases and realised that the only reason to abbreviate identifiers is that it looks better when you list them.<br />However, mapping them to their real-world counterpart data (rw_ctrprt_data?!?) will, for the next developer, be an error-prone and boring chore.<br /><br />Recently, a colleague of mine came to me with an identifier which, unabbreviated, would become something like 120 characters long.
I had to tell him to go with it, since there was no way around it.<br /><br />The typical reason for an identifier to become this long is that it describes an exception of some sort, like: "add taxes if the counterparty's custodian has the same currency as the trader's custodian".<br />Usually, it is some real-world legal rule that has to be followed.<br />And believe me, we use all the tricks, like dynamic structures and so forth, to avoid having to name columns like this, so don't give me any "with the right design" crap, because reality is different from school.<br />But many times, one has to do it that way. Yeah, one could use column descriptions, but they aren't as useful as one might think.<br /><br />Anyway, I will now drop Oracle support. Too bad I've already spent hours on making the <a href="http://unifiedbpm.blogspot.com/2010/07/please-someone-tell-me-i-am-wrong.html">auto-increment support</a>.<br />Luckily, it was really straightforward to uninstall the Oracle software; kudos to them. :-)<br /><br />EDIT:<br />On closer thought, I will probably not drop Oracle support entirely; rather, I will have only partial or no DDL support. It would be silly to exclude a very popular database from the databases supported by UnifiedBPM.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-41275172993926742202010-07-25T03:11:00.000-07:002010-07-25T12:22:54.125-07:00More frustration and extremism.Ok. 
In my last post I ranted about limitations in a specific database.<br />But sometimes, developers don't give a sh*t about how the users actually want to use the system they make.<br />Rather, they go to extreme lengths explaining why the user should do it another way instead.<br />Even though the users obviously don't want to do it that way.<br /><br />For example, I am implementing column defaults in the DAL, and have finally gotten it to work for MySQL.<br />What is the most common default value? To me, and in all databases I have seen so far, it is the current date and time.<br /><br />I hadn't thought about it much until now, since I had seen the TIMESTAMP datatype being used for that in MySQL.<br /><br />I did not, however, consider that DEFAULT (Now()) wasn't even supported, since functions aren't allowed as default values!<br />So I will have to use another datatype, the aforementioned TIMESTAMP, which I will transform into DATETIME DEFAULT (GETDATE()) when needed.<br />However, TIMESTAMP is completely inflexible, and has a different start date than the normal datetime. So, it sucks and probably has to be handled separately later on.<br />But, screw everything, I'll use it for now.<br /><br />See <a href="http://bugs.mysql.com/bug.php?id=27645">http://bugs.mysql.com/bug.php?id=27645</a> for the reeaally long discussion.<br /><br />Since many FOSS systems, like mine, put huge effort into being database-agnostic, it is extremely annoying that the databases don't try to meet them halfway.<br />Like the stupid reasoning in the Oracle camp for not shortcutting auto-incrementation, for the condescending reason of teaching users how to do things properly.<br /><br />The only database that seems to be fairly balanced, and doesn't create any significant problems for me, is PostgreSQL.<br /><br />Now it seems that I am letting MSSQL off easy.
But of course, MSSQL has the dubious advantage of almost not being designed at all, thus accepting practically everything thrown at it at this, the most basic, level of database development.<br />It's when it gets advanced that MSSQL comes back and bites you like you've never been bitten, with its partial implementations. The insane PARTITION BY limitations almost killed me, for example.<br />That said, at this level it works well.<br /><br />Cy'all.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-51158288560326982752010-07-20T15:14:00.000-07:002010-07-20T15:56:29.151-07:00Hacking along, realising stuff.I am doing a little more than is necessary on the DAL (database abstraction layer), because I have found it pretty rewarding.<br /><br />I look at myself as something of an accomplished SQL dude, knowing perhaps more than is healthy about databases, but this has made me reflect on the structure of the SQL language. A refreshing angle for me, to say the least. Sometimes you do something so much that you just end up muttering about its shortcomings. Especially different implementations' shortcomings.<br /><br />It's interesting to realise what really belongs where; it was way harder than I thought to break it down into its logical pieces.<br />It's small things, like the realisation that a join works just like a complete query, simply another source of data treated by the query:<br /><pre><br />SELECT N.Name, A.Role<br />FROM Names N -- Source 1 data<br />JOIN Actors A ON A.PersonID = N.PersonID -- Source 2 data+conditions<br />WHERE N.Name = 'Clint Eastwood' -- Source 1 conditions<br /></pre>Being able to treat these sources in exactly the same way makes things way easier. That is just one example. That conditions are made of expressions that can themselves contain conditions would be another.
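<br /><br />The join-as-source idea can be sketched in a few lines of Python. The names below (Source, select_sql) are invented for the illustration, not the actual DAL classes; the point is that every source carries its own data and its own conditions, and everything after the first source renders as a JOIN:

```python
# Sketch: the FROM table and each JOIN are the same kind of object, a
# "source"; a join is simply a source that carries its own conditions.
# Source and select_sql are invented names for this illustration.

class Source:
    def __init__(self, table, alias, conditions=None):
        self.table, self.alias = table, alias
        self.conditions = conditions or []  # join conditions; empty for FROM

def select_sql(fields, sources, where):
    first, *rest = sources
    lines = ["SELECT " + ", ".join(fields), f"FROM {first.table} {first.alias}"]
    for s in rest:  # every further source renders as a JOIN
        lines.append(f"JOIN {s.table} {s.alias} ON " + " AND ".join(s.conditions))
    lines.append("WHERE " + " AND ".join(where))
    return "\n".join(lines)

print(select_sql(["N.Name", "A.Role"],
                 [Source("Names", "N"),
                  Source("Actors", "A", ["A.PersonID = N.PersonID"])],
                 ["N.Name = 'Clint Eastwood'"]))
```

This reproduces the query above, minus the comments.<br /><br />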
Definitions, definitions.<br />There is more stuff like that in SQL DML, but it is a bit harder to explain, so I won't.<br /><br />Anyway, I solved the <a href="http://unifiedbpm.blogspot.com/2010/07/please-someone-tell-me-i-am-wrong.html">auto-increment issue</a> with Oracle by having a postSQL property, simply generating code for creating the sequence and the trigger and then concatenating that to the end of the CREATE TABLE SQL. I haven't tested that code very thoroughly yet, though.<br /><br />Another reason to stay with the DAL a bit longer is that I am thinking of creating a transformation toolchain for use in Unified BPM. Or at least one that can be used for more than database upgrades.<br /><br />Well. See ye later.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0tag:blogger.com,1999:blog-5424108735827007207.post-91062741055646576272010-07-03T13:36:00.001-07:002010-07-03T14:19:30.246-07:00Please someone tell me I am wrong.Here I was, happily coding along, when I ran into something that may force me to drop Oracle support: Oracle's implementation of sequences.<br /><br />By not having built-in insert support for auto-incrementing sequences like the others do (PostgreSQL, MySQL, SQL Server et al.), Oracle leaves the incrementing of the primary key to the insert statement, using seqname.nextval.<br /><br />Yes, this could be done using a trigger. But to me, triggers are teh ugly shit, in general.<br />Having one in almost every table really stinks. It is bad form. Added a column? Tough luck, you're likely to have some trigger that needs that column added to it.<br /><br />I have seen the rationale for this, that it is better to have sequences table-independent.<br />But that's missing the point.
It's not about that; everybody else has that too.<br />It's about <span style="font-weight: bold;">control</span>, and it should be the <span style="font-weight: bold;">OTHER WAY.</span><br /><br />The sequencing should always happen automatically in the tables that use sequences, when records are inserted.<br />It should not be possible for an application to insert a custom value into a sequenced column. It should not be possible to, by accident, choose the wrong sequence.<br /><br />This aspect even makes it dangerous.<br />Actually, most developers just use sequences to get a new value for each insert, not caring very much if there is a gap. The only time they get in trouble for using the wrong sequence is when they try to insert a value that already exists. Usually, that is not even a real problem, and it is almost immediately visible.<br /><br />But in financial applications, for example, having gaps in the sequence could have serious legal ramifications, possibly not seen until some reconciliation event occurs years later. And by then it will have created situations that are almost impossible to rectify. And, even more importantly, to explain to the SEC or IRS when they come knocking. Where did you put those missing futures contracts now again? Where is that coupon? Hopefully someone will notice. But likely they won't, since many are working with the same sequence.<br /><br />So to provide sequencing support for Oracle, the INSERT commands need to know lots of things about the table. And since I want to abstract the creation, I need to keep track of this weirdness.<br /><br />I am thinking about automatically creating a very specifically named trigger, but it seems so stupid and out of place.
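<br /><br />For what it's worth, the per-table boilerplate such an auto-created trigger would mean looks roughly like this. The helper name oracle_autoincrement_ddl is hypothetical, a sketch rather than actual DAL code:

```python
# Hypothetical helper: generate a deterministically named sequence and
# trigger for one auto-increment column, to be appended after CREATE TABLE.
def oracle_autoincrement_ddl(table, column):
    seq = f"seq_{table}_{column}"
    return (
        f"CREATE SEQUENCE {seq} START WITH 1 INCREMENT BY 1 NOMAXVALUE;\n"
        f"CREATE TRIGGER tr_{table}_{column}\n"
        f"BEFORE INSERT ON {table} FOR EACH ROW\n"
        f"BEGIN\n"
        f"  SELECT {seq}.nextval INTO :new.{column} FROM dual;\n"
        f"END;"
    )

print(oracle_autoincrement_ddl("Table1", "Table1ID"))
```
<br /><br />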
It really does not fit well with how the DAL is supposed to work.<br /><br />Aww crap.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com1tag:blogger.com,1999:blog-5424108735827007207.post-85475177709007734242010-06-13T13:18:00.000-07:002010-06-14T05:17:29.871-07:00Database Abstraction Layer - DALHi!<br /><br />The last month has been somewhat difficult with regards to available time.<br />But I've still managed to churn out an early version of what will be the core of the database abstraction layer: the classes that generate the SQL for each database.<br />The code currently supports MySQL, PostgreSQL, Oracle and SQL Server.<br />Since they will be sent as parameters into the DAL, I call them parameters.<br />They do, however, have more functionality than mere parameters.<br />There is a base class, Parameter_Base, that all parameters inherit from, with four methods:<br /><ul><li>AsMySQL()</li><li>AsPostGreSQL()</li><li>AsOracle()</li><li>AsSQLServer()</li></ul>Each of these methods returns a database-specific SQL representation of the parameter's contents.<br />There are classes like Parameter_Field, Parameter_Expression, Parameter_CASE, Parameter_Function, Parameter_SELECT and so on.<br /><br />They use each other to generate, for example, a SELECT statement.<br />Consequently, Parameter_SELECT has (currently) two properties, fields (a list of Parameter_Field) and sources (a list of Parameter_Source), which in turn use other classes.<br />So when one calls the Parameter_SELECT.AsMySQL function, it loops through the objects in its fields and sources lists and calls their .AsMySQL functions.
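<br /><br />In condensed form, the dispatch looks roughly like this. This is heavily simplified relative to the real parameter.py and shows only the MySQL path:

```python
# Condensed sketch of the Parameter_* structure described above; the real
# classes also implement AsPostGreSQL, AsOracle and AsSQLServer.

class Parameter_Base:
    def AsMySQL(self):
        raise NotImplementedError  # each parameter renders itself

class Parameter_Field(Parameter_Base):
    def __init__(self, name):
        self.name = name
    def AsMySQL(self):
        return f"`{self.name}`"

class Parameter_Source(Parameter_Base):
    def __init__(self, table):
        self.table = table
    def AsMySQL(self):
        return self.table

class Parameter_SELECT(Parameter_Base):
    def __init__(self, fields, sources):
        self.fields, self.sources = fields, sources
    def AsMySQL(self):
        # Loop over the children and let each render itself for the backend.
        return ("SELECT " + ", ".join(f.AsMySQL() for f in self.fields) +
                " FROM " + ", ".join(s.AsMySQL() for s in self.sources))

sel = Parameter_SELECT([Parameter_Field("Name")], [Parameter_Source("Names")])
print(sel.AsMySQL())  # SELECT `Name` FROM Names
```
<br /><br />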
This way, a MySQL-specific SQL representation of the Parameter_SELECT object is generated.<br /><br />The parameters: <a href="http://www.menialtools.com/images/parameter.py">parameter.py</a><br />Some tests: <a href="http://www.menialtools.com/images/parameter_tests.py">parameter_tests.py</a><br />Crude documentation: <a href="http://www.menialtools.com/images/Doc_20100613.pdf">Doc_20100613.pdf</a><br /><br />So far, most elements of typical SELECT statements are covered, like expressions and joins.<br />Stuff like ORDER BY, GROUP BY and HAVING isn't, but from now on it should be much easier, since they merely reuse the concepts I have already defined.<br />I have to admit, the structure may look pretty natural when you look at it, but it was quite hard to generalise and simplify it while maintaining all the functionality of the language.<br /><br />Well. I suppose that was it.Zig007http://www.blogger.com/profile/15695733650499118900noreply@blogger.com0